Re: [zfs-discuss] query re disk mirroring

2012-11-30 Thread Enda O'Connor
On 29/11/2012 14:51, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Enda o'Connor - Oracle Ireland -

Say I have an ldoms guest that is using zfs root pool that is mirrored,
and the two sides of the mirror are coming from two separate vds
servers, that is
mirror-0
c3d0s0
c4d0s0

where c3d0s0 is served by one vds server, and c4d0s0 is served by
another vds server.

Now if for some reason this physical rig loses power, then how do I
know which side of the mirror to boot off, ie which side is most recent?


If one storage host goes down, it should be no big deal, one side of the mirror 
becomes degraded, and later when it comes up again, it resilvers.

If one storage host goes down, and the OS continues running for a while and 
then *everything* goes down, later you bring up both sides of the storage, and 
bring up the OS, and the OS will know which side is more current because of the 
higher TXG.  So the OS will resilver the old side.
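If you ever need to check that by hand, the TXGs are visible in the vdev
labels; a rough sketch, assuming the guest is down and the devices are
reachable from a service domain or rescue environment ( device paths are
the ones from this thread ):

zdb -l /dev/dsk/c3d0s0 | grep -i txg
zdb -l /dev/dsk/c4d0s0 | grep -i txg
( some zdb versions also accept -lu to dump the uberblock txgs )

The side whose labels report the higher txg is the more current one.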

If one storage host goes down, and the OS continues running for a while and 
then *everything* goes down...  Later you bring up only one half of the 
storage, and bring up the OS.  Then the pool will refuse to mount, because with 
missing devices, it doesn't know if maybe the other side is more current.

As long as one side of the mirror disappears and reappears while the OS is 
still running, no problem.

As long as all the devices are present during boot, no problem.

The only problem is when you try to boot from one side of a broken mirror.  If you need to
do this, mark the mirror as broken before shutting down: detach would certainly do the
trick, and offline would probably work as well.
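For example, assuming the root pool is called rpool ( pool and device names
here are just the ones from this thread ), a minimal sketch of marking one
side before a planned shutdown would be one of:

zpool offline rpool c3d0s0   ( keeps the device configured; it resilvers when brought online again )
zpool detach rpool c3d0s0    ( removes it from the mirror; reattach later with zpool attach rpool c4d0s0 c3d0s0 )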


Thanks. From my testing, it appears that if a disk goes into the UNAVAIL
state and further data is written to the other disk, then even if I boot
from the stale side of the mirror, the boot process detects this, mounts
the good side, and resilvers the side I passed as the boot argument.
If the disk is FAULTED, booting from it results in ZFS panicking and
telling me to boot from the other side.


So it appears that some failure modes are handled well, while others
result in a panic loop.


I have both sides in boot-device and both disks are available to OBP at 
boot time in my testing.


I'm just trying to determine the optimal value for autoboot in my ldoms
guests in the face of various failure modes.


thanks for the info
Enda




Does that answer it?



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] query re disk mirroring

2012-11-29 Thread Enda o'Connor - Oracle Ireland - Software Engineer

Hi
Say I have an ldoms guest that is using zfs root pool that is mirrored, 
and the two sides of the mirror are coming from two separate vds 
servers, that is

mirror-0
  c3d0s0
  c4d0s0

where c3d0s0 is served by one vds server, and c4d0s0 is served by 
another vds server.


Now if for some reason this physical rig loses power, then how do I
know which side of the mirror to boot off, ie which side is most recent?


As an example ( contrived now mind you )

I shut down the IO server domain serving c3d0s0, then copy a large file
into my root pool ( it goes to the vds serving c4d0s0 ), then shut down
the guest and the other service domain gracefully, then boot the guest
off c3d0s0 ( having restarted the service domain there ); the large file
is obviously missing now.


Is there any way, if the guest is stopped, that I can know which side of
the mirror to boot off was most recent?

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] kernel panic during zfs import [UPDATE]

2012-04-17 Thread Enda O'Connor

On 17/04/2012 16:40, Carsten John wrote:

Hello everybody,

just to let you know what happened in the meantime:

I was able to open a Service Request at Oracle.

The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)

The bug has been fixed (according to Oracle support) since build 164, but there 
is no fix for Solaris 11 available so far (will be fixed in S11U7?).

There is a workaround available that works (partly), but my system crashed 
again when trying to rebuild the offending zfs within the affected zpool.

At the moment I'm waiting for a so called interim diagnostic relief patch


So are you on S11? Can I see the output of pkg info entire?

This bug is fixed in the FCS S11 release, as that is build 175b and the
fix went in at build 164. So if you have Solaris 11, that CR is fixed.


In solaris 10 it is fixed in 147440-14/147441-14 ( sparc/x86 )
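To check which of those you are on ( just a sketch; output will vary ):

on Solaris 11:   pkg info entire
on Solaris 10:   showrev -p | grep 147440    ( SPARC; grep for 147441 on x86 )

If the kernel patch shows at rev -14 or later, the fix is present.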


Enda



cu

Carsten



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Server upgrade

2012-02-15 Thread Enda O'Connor

On 15/02/2012 17:16, David Dyer-Bennet wrote:

While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).

I'm actually more concerned with software than hardware.  My load is
small, the current hardware is handling it no problem.  I don't see myself
as a candidate for dedup, so I don't need to add huge quantities of RAM.
I'm handling compression on backups just fine (the USB external disks are
the choke-point, so compression actually speeds up the backups).

I'd like to be on a current software stream that I can easily update with
bug-fixes and new features.  The way I used to do that got broke in the
Oracle takeover.

I'm interested in encryption for my backups, if that's functional (and
safe) in current software versions.  I take copies off-site, so that's a
useful precaution.

Whatever I do, I'll of course make sure my backups are ALL up-to-date and
at least one is back off-site before I do anything drastic.

Is there an upgrade path from (I think I'm running Solaris Express) to
something modern?  (That could be an Oracle distribution, or the free
software fork, or some Nexenta distribution; my current data pool is 1.8T,
and I don't expect it to grow terribly fast, so the fully-featured free
version fits my needs for example.)  Upgrading might perhaps save me from
changing all the user passwords (half a dozen, not a huge problem) and
software packages I've added.

(uname -a says SunOS fsfs 5.11 snv_134 i86pc i386 i86pc).


So this is the last OpenSolaris release ( ie not Solaris Express );
S11 Express was build 151, so snv_134 is older again.
I'm not sure there is an upgrade path to Express from OpenSolaris; I
don't think there is.
And S11 itself is now the latest, based off build 175b. There is an
upgrade path from Express to S11, but not from OpenSolaris to Express,
if I remember correctly.


Or should I just export my pool and do a from-scratch install of
something?  (Then recreate the users and install any missing software.
I've got some cron jobs, too.)

AND, what something should I upgrade to or install?  I've tried a couple
of times to figure out the alternatives and it's never really clear to me
what my good options are.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] patching a solaris server with zones on zfs file systems

2012-01-21 Thread Enda O'Connor

Hi
Need more info here: what exactly is the root FS, ie zfs?
What kernel rev is current ( uname -a )?
Is there a specific patch that is being installed?

If so, then Live Upgrade is the best bet, combined perhaps with the
recommended patch cluster.


Apply the latest rev of 119254 and 121430 ( SPARC ) or 119255 and 121431
( x86 ), then use lucreate to create a new BE and the installpatchset
script in the recommended cluster to patch the ABE.
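In outline, that flow looks something like this ( BE name and paths are
just placeholders; always follow the patch and cluster READMEs ):

patchadd <latest 119254/119255>    ( patch utilities patch first, on the live BE )
patchadd <latest 121430/121431>    ( Live Upgrade patch )
lucreate -n patchedBE
luupgrade -t -n patchedBE -s /path/to/patches <patch_ids>
    ( or use the cluster's own install script against the ABE as per its README )
luactivate patchedBE
init 6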


If Live Upgrade is not an option, then I suggest still going with the
recommended patch cluster; it is well tested and the install script is
very robust. Depending on what the current kernel level is, zones might
have to be halted if patching the live BE.


If doing this manually, then apply the latest rev of the 119254/119255
( SPARC/x86 ) patch utilities patch first.



Enda
On 21/01/2012 10:46, bhanu prakash wrote:

Hi All,

Please let me know the procedure how to patch a server which is having 5
zones on zfs file systems.

Root file system exists on internal disk and zones are existed on SAN.

Thank you all,
Bhanu


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S10 version question

2011-09-30 Thread Enda O'Connor

On 29/09/2011 23:59, Rich Teer wrote:

Hi all,

Got a quick question: what are the latest zpool and zfs versions
supported in Solaris 10 Update 10?

TIA,



root@pstx2200a # zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

root@pstx2200a # zpool upgrade -a
This system is currently running ZFS pool version 29.

All pools are formatted using this version.
root@pstx2200a # cat /etc/release
Oracle Solaris 10 8/11 s10x_u10wos_17b X86
  Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights 
reserved.

Assembled 23 August 2011
root@pstx2200a #
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] not sure how to make filesystems

2011-05-31 Thread Enda O'Connor

On 29/05/2011 19:55, BIll Palin wrote:

I'm migrating some filesystems from UFS to ZFS and I'm not sure how to create a 
couple of them.

I want to migrate /, /var, /opt, /export/home and also want swap and /tmp.  I 
don't care about any of the others.

The first disk, and the one with the UFS filesystems, is c0t0d0 and the 2nd 
disk is c0t1d0.

I've been told that /tmp is supposed to be part of swap.  So far I have:

lucreate -m /:/dev/dsk/c0t0d0s0:ufs -m /var:/dev/dsk/c0t0d0s3:ufs -m 
/export/home:/dev/dsk/c0t0d0s5:ufs -m /opt:/dev/dsk/c0t0d0s4:ufs -m 
-:/dev/dsk/c0t1d0s2:swap -m /tmp:/dev/dsk/c0t1d0s3:swap -n zfsBE -p rootpool

And then set quotas for them.  Is this right?

Hi
ZFS root is very different: one cannot mix UFS filesystems with
zvol-based swap at all.

And lucreate is a bit restricted here: one cannot split out /var.

The only form that works is
lucreate -n zfsBE -p rpool

where rpool is a pool on an SMI-labelled slice.
To check for an SMI label, run format, select the rpool disk and type
p, p, then check whether it lists cylinders ( SMI ). If not, run
format -e on the disk and label it ( delete rpool first if it already
exists ), then preferably ( but not necessarily ) put all the space in
slice 0, say, so that rpool gets the whole disk.


After booting zfsBE, one can resize the swap and dump zvols ( search for
zfs root swap for details ).
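For example ( sizes here are made up, and depending on release the dump
device may need to be unconfigured before resizing ):

swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=8G rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

zfs set volsize=4G rpool/dump
dumpadm -d /dev/zvol/dsk/rpool/dump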


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Prefetch Tuning

2010-12-09 Thread Enda O'Connor

Hi
I'd certainly look at the SQL being run: examine the explain plan and in
particular use SQL_TRACE, TIMED_STATISTICS, and TKPROF; these will really
highlight issues.


see following for autotrace which can generate explain plan etc.

http://download.oracle.com/docs/cd/B10500_01/server.920/a96533/autotrac.htm


Then the following can really help:
SQL> alter session set sql_trace=true;
run the SQL
SQL> alter session set sql_trace=false;  ( this is very important as it
closes the trace session )

SQL> show parameters user_dump_dest
 gives the location of the output from the SQL trace

Go to the user_dump_dest directory; you will see something like
${ORACLE_SID}_ora_6919.trc
tkprof  ${ORACLE_SID}_ora_6919.trc 6919.trc explain=scott/tiger sys=no

ie explain=schema owner and password; if unsure just run
tkprof  ${ORACLE_SID}_ora_6919.trc 6919.trc

This can provide some very informative output, eg otherwise unseen ORA
errors from user functions and so on.



read the following to get an idea of how to get at the problematic SQL
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sql_1016.htm#i26072

I had an interesting issue the other day, where a tablespace was nearing
100% full on a test DB that isn't properly monitored, and queries started
to run really, really slowly.



Enda


On 09/12/2010 20:22, Jabbar wrote:

Hello Tony,

If the hardware hasn't changed I'd look at the workload on the database
server. If the customer is taking regular statspack snapshots they might
be able to see whats causing the extra activity. They can use AWR or the
diagnostic pack, if they are licensed, to see the offending SQL or
PL/SQL or any hot objects.

However if you want to tune at the ZFS level then the following has some
advice for ZFS and databases
http://www.solarisinternals.com/wiki/index.php/ZFS_for_Databases.

On 9 December 2010 15:48, Tony Marshall tony.marsh...@oracle.com wrote:

Hi All,

Is there a way to tune the zfs prefetch on a per pool basis?  I have
a customer that is seeing slow performance on a pool the contains
multiple tablespaces from an Oracle database, looking at the LUNs
associated to that pool they are constantly at 80% - 100% busy.
Looking at the output from arcstat for the miss % on data, prefetch
and metadata we are getting around 5 - 10 % on data, 50 - 70 % on
prefetch and 0% on metadata.  I am thinking that the majority of the
prefetch misses are due to the tablespace data files.

The configuration of the system is as follows

Sun Fire X4600 M2 8 x 2.3 GHz Quad Core Processor, 256GB Memory
Solaris 10 Update 7
ZFS Arc cache max set to 85GB
4 Zpools configured from a 6540 Storage array

* apps - single LUN (raid 5) recordsize set to 128k, from the
  array, pool contains binaries and application files
* backup - 8 LUNs (varying sizes all from a 6180 array with SATA
  disks) used for storing oracle dumps
* data - 5 LUNs (Raid 10  6 physical drives) recordsize set to
  8k, used for Oracle data files
* logs - single LUN (raid 10 from 6 physical drives) recordsize
  set to 128k, used for Oracle redo log files, temp db, undo db
  and control files.

18 Solaris 10 zones, of which 12 of these are oracle zones sharing
the data and logs pools.

I think that the prefetch will be useful on the apps and backup
pools, however I think that on the data and logs pools this could be
causing issues with the amount of IO that is being caused by the
prefetch and the amount that it is missing in the arcstats could be
the reason why the devices are at 100% busy.  Is there a way to turn
the prefetch off for just a single pool? Also is this something that
can be done online or will it require a reboot to put into effect.

Thanks in advance for your assistance in this matter.

Regards
Tony
--
Oracle http://www.oracle.com
Tony Marshall | Technical Architect
Phone: +44 118 924 9516 | Mobile: +44 7765 898570
Oracle Remote Operations Management
United Kingdom

ORACLE Corporation UK Ltd is a company incorporated in England 
Wales | Company Reg. No. 1782505 | Reg. office: Oracle Parkway,
Thames Valley Park, Reading RG6 1RA
Green Oracle http://www.oracle.com/commitment Oracle is committed
to developing practices and products that help protect the environment

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




--
Thanks

  A Jabbar Azam



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list

Re: [zfs-discuss] ZFS cache inconsistencies with Oracle

2010-10-15 Thread Enda O'Connor

Hi
So, to be absolutely clear:
in the same session, you ran an update, commit and select, and the
select returned an earlier value than the committed update?


Things like
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;

will cause a session to NOT see commits from other sessions, but in
Oracle one always sees one's own updates within one's own transactions
( assuming no other session makes a change, of course ).


So are you sure that
1. some other session hasn't changed the value between the commit and
the select in your session, and

2. some DB trigger isn't doing this, ie setting some default value?

In my experience with DBs, triggers are the root of all evil.

Enda
On 15/10/2010 14:36, Gerry Bragg wrote:

A customer is running ZFS version15 on Solaris SPARC 10/08 supporting
Oracle 10.2.0.3 databases in a dev and production test environment. We
have come across some cache inconsistencies with one of the Oracle
databases where fetching a record displays a 'historical value' (that
has been changed and committed many times). This is an isolated
occurance and is not always consistent. I can't replicate it to other
tables. I'll also be posting a note to the ZFS discussion list.

Is it possible for a read to bybpass the write cache and fetch from disk
before the flush of the cache to disk occurs? This is a large system
that is infrequently busy. The Oracle SGA size is minimized to 1GB per
instance and we rely more on the ZFS cache, allowing us to fit ‘more
instances’ (many of which are cloned snapshots). We’ve been running this
setup for 2 years. The filesystems are set with compression on,
blocksize 8k for oracle datafiles, 128k for redologs.

Here are the details of the scenerio:

1. Update statement re-setting existing value. At this point the
previous value was actually set to -643 prior to the update. It was
originally set to 3 before today’s session:

SQL update [name deleted] set status_cd = 1 where id = 65;

1 row updated.

SQL commit;

Commit complete.

SQL select rowid, id, status_cd from [table name deleted]

SQL where id = 65;

ROWID ID STATUS_CD

-- -- --

AAAq/DAAERlAAM 65 3

Note that when retrieved the status_cd reverts to the old original value
of 3, not the previous value of -643.

2. Oracle trace file proves that the update was issued and committed:

=

PARSING IN CURSOR #1 len=70 dep=0 uid=110 oct=6 lid=110
tim=17554807027344 hv=3512595279 ad='fd211878'

update [table deleted] set status_cd = 1 where id = 65 END OF STMT PARSE
#1:c=0,e=54,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=17554807027340

BINDS #1:

EXEC #1:c=0,e=257,p=0,cr=3,cu=3,mis=0,r=1,dep=0,og=2,tim=17554807027737

WAIT #1: nam='SQL*Net message to client' ela= 2 driver id=1413697536
#bytes=1 p3=0 obj#=-1 tim=17554807027803 WAIT #1: nam='SQL*Net message
from client' ela= 2999139 driver id=1413697536 #bytes=1 p3=0 obj#=-1
tim=17554810026992 STAT #1 id=1 cnt=1 pid=0 pos=1 obj=0 op='UPDATE
[TABLE DELETED] (cr=3 pr=0 pw=0 time=144 us)'

STAT #1 id=2 cnt=1 pid=1 pos=1 obj=177738 op='INDEX UNIQUE SCAN
[TABLE_DELETED]_XPK (cr=3 pr=0 pw=0 time=19 us)'

PARSE #2:c=0,e=9,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=17554810027367

XCTEND rlbk=0, rd_only=0

EXEC #2:c=0,e=226,p=0,cr=0,cu=1,mis=0,r=0,dep=0,og=0,tim=17554810027630

WAIT #2: nam='log file sync' ela= 833 buffer#=9408 p2=0 p3=0 obj#=-1
tim=17554810028507 WAIT #2: nam='SQL*Net message to client' ela= 2
driver id=1413697536 #bytes=1 p3=0 obj#=-1 tim=17554810028578 WAIT #2:
nam='SQL*Net message from client' ela= 1825185 driver id=1413697536
#bytes=1 p3=0 obj#=-1 tim=17554811853812 = PARSING
IN CURSOR #1 len=67 dep=0 uid=110 oct=3 lid=110 tim=17554811854015
hv=1593702413 ad='fd713640'

select status_cd from [table_deleted] where id = 65 END OF STMT PARSE
#1:c=0,e=41,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=17554811854010

BINDS #1:

EXEC #1:c=0,e=91,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,tim=17554811854273

WAIT #1: nam='SQL*Net message to client' ela= 1 driver id=1413697536
#bytes=1 p3=0 obj#=-1 tim=17554811854327 FETCH
#1:c=0,e=64,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=2,tim=17554811854436

WAIT #1: nam='SQL*Net message from client' ela= 780 driver id=1413697536
#bytes=1 p3=0 obj#=-1 tim=17554811855291 FETCH
#1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,tim=17554811855331

WAIT #1: nam='SQL*Net message to client' ela= 0 driver id=1413697536
#bytes=1 p3=0 obj#=-1 tim=17554811855366

There are no Oracle or Solaris error messages indicating any issue with
this update. Haas anyone seen this behavoir?

The features of ZFS (snapshots/clones/compression) save us a ton of time
on this platform and we have certainly benefited from it. Just want to
understand how something like this could occur and determine how we can
prevent it in the future.

==

Gerry Bragg

Sr. Developer

Altarum Institute

(734) 516-0825

gerry.br...@altarum.org

www.altarum.org

Systems Research 

Re: [zfs-discuss] ZFS flash issue

2010-09-28 Thread Enda O'Connor

On 28/09/2010 10:20, Ketan wrote:

I have created a solaris9 zfs root flash archive for sun4v environment which i 
'm tryin to use for upgrading solaris10 u8 zfs root based server using live 
upgrade.

One cannot use a zfs flash archive with luupgrade: with zfs root, a
flash archive archives the entire root pool ( there is a -D option to
exclude datasets ), and it can only be installed via jumpstart. There is
no way to provision a BE from a flash archive yet, or at least none that
I'm aware of.
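For reference, a minimal jumpstart profile for installing a zfs flash
archive looks roughly like this ( server path, disk and BE name are
placeholders ):

install_type     flash_install
archive_location nfs install-server:/export/flar/zfsroot.flar
partitioning     explicit
pool rpool auto auto auto c0t0d0s0
bootenv installbe bename zfsBE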


Enda



following is my current system status


lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfsBEu9                    yes      no     no        yes    -



when i try to upgrade the with luupgrade i get following error



luupgrade -f -n zfsBEu9 -s /mnt -a /flash/zfsBEu9.flar

63521 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
Validating the contents of the miniroot </mnt/Solaris_10/Tools/Boot>.
Locating the flash install program.
Checking for existence of previously scheduled Live Upgrade requests.
Constructing flash profile to use.
Creating flash profile for BE <zfsBEu9>.
Performing the operating system flash install of the BE <zfsBEu9>.
CAUTION: Interrupting this process may leave the boot environment unstable or 
unbootable.
ERROR: The flash install failed; pfinstall returned these diagnostics:

ERROR: Field 2 - Invalid disk name (zfsBEu9)
The Solaris flash install of the BE <zfsBEu9> failed.


What could be the reason for this .. is there anything i 'm not doin k ?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help, my zone's dataset has disappeared!

2010-02-26 Thread Enda O'Connor

On 26/02/2010 14:03, Jesse Reynolds wrote:

Hello

I have an amd64 server running OpenSolaris 2009-06. In December I created one 
container on this server named 'cpmail' with it's own zfs dataset and it's been 
running ever since. Until earlier this evening when the server did a kernel 
panic and rebooted. Now, I can't see any contents in the zfs dataset for this 
zone!

The server has two disks which are root mirrored with ZFS:

# zpool status
   pool: rpool
  state: ONLINE
  scrub: none requested
config:

 NAME  STATE READ WRITE CKSUM
 rpool ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c8t0d0s0  ONLINE   0 0 0
 c8t1d0s0  ONLINE   0 0 0

errors: No known data errors

Here are the datasets:

# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                      161G  67.6G  79.5K  /rpool
rpool/ROOT                3.66G  67.6G    19K  legacy
rpool/ROOT/opensolaris    3.66G  67.6G  3.51G  /
rpool/cpmail               139G  67.6G    22K  /zones/cpmail
rpool/cpmail/ROOT          139G  67.6G    19K  legacy
rpool/cpmail/ROOT/zbe      139G  67.6G   139G  legacy
rpool/dump                2.00G  67.6G  2.00G  -
rpool/export              7.64G  67.6G  7.49G  /export
rpool/export/home          150M  67.6G    21K  /export/home
rpool/export/home/jesse    150M  67.6G   150M  /export/home/jesse
rpool/repo                6.56G  67.6G  6.56G  /rpool/repo
rpool/swap                2.00G  69.4G   130M  -

/zones/cpmail is where it should be mounting the zone's dataset, I believe.

Here's what happens when I try and start the zone:

# zoneadm -z cpmail boot
could not verify zfs dataset mailtmp: dataset does not exist
zoneadm: zone cpmail failed to verify


So the zone is trying to find a dataset 'mailtmp' and failing because it 
doesn't exist. So, what happened to it?

Here's the zone config file, at /etc/zones/cpmail.xml (with IP address 
obfuscated)

# cat /etc/zones/cpmail.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE zone PUBLIC "-//Sun Microsystems Inc//DTD Zones//EN"
"file:///usr/share/lib/xml/dtd/zonecfg.dtd.1">
<!--
 DO NOT EDIT THIS FILE.  Use zonecfg(1M) instead.
-->
<zone name="cpmail" zonepath="/zones/cpmail" autoboot="false" brand="ipkg">
  <network address="xxx.xxx.xxx.xxx" physical="bge1"/>
  <dataset name="mailtmp"/>
</zone>


The above doesn't look correct to me: surely this should be
rpool/mailtmp, assuming you don't have other pools it might live in
( what does zpool import say, by the way? ).


Did this get added to a running zone and then fail on reboot, perhaps?
To me it looks like this never worked.
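If the intent was to delegate an existing dataset, the zone config can
be pointed at the right name; a sketch, assuming the dataset should
really be rpool/mailtmp and that it still exists ( zfs list
rpool/mailtmp will tell you ):

zonecfg -z cpmail
remove dataset name=mailtmp
add dataset
set name=rpool/mailtmp
end
commit
exit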


Enda


I just don't understand where the dataset 'mailtmp' went to.  Perhaps it was an 
initial name I used for the dataset and I then renamed it to cpmail, but then I 
can't see any of the zones files in /zones/cpmail :

# find /zones/cpmail/
/zones/cpmail/
/zones/cpmail/dev
/zones/cpmail/root

Does ZFS store a log file of all operations applied to it? It feels like 
someone has gained access and run 'zfs destroy mailtmp' to me, but then again 
it could just be my own ineptitude.

Thank you
Jesse


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Live Upgrade Solaris 10 UFS to ZFS boot pre-requisites?

2009-12-08 Thread Enda O'Connor

Hi
The live upgrade info doc
http://sunsolve.sun.com/search/document.do?assetkey=1-61-206844-1
has all the relevant patches. If you are on the u6 KU or higher ( you
are on u8 ), then you can migrate straight to zfs; there is no need to
upgrade to u8 UFS first in order to move to u8 ZFS. The u6 KU delivers
the new SPARC boot support for zfs boot etc. Just make sure you take the
very latest Live Upgrade patch, 121430-43.
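The migration itself then boils down to something like the following
( the slice name is a placeholder; the target slice needs an SMI label ):

zpool create rpool c1t1d0s0
lucreate -n zfsBE -p rpool
luactivate zfsBE
init 6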



Enda
Bob Friesenhahn wrote:
I have a Solaris 10 U5 system massively patched so that it supports ZFS 
pool version 15 (similar to U8, kernel Generic_141445-09), live upgrade 
components have been updated to Solaris 10 U8 versions from the DVD, and 
GRUB has been updated to support redundant menus across the UFS boot 
environments.


I have studied the Solaris 10 Live Upgrade manual (821-0438) and am 
unable to find any statement which requires/suggests that I live upgrade 
to U8 with UFS boot before live upgrading to ZFS boot but the page at 
http://docs.sun.com/app/docs/doc/819-5461/ggpdm?a=view recommends that 
this process should be used.  The two documents do not seem to agree.


Given that my system is essentially equivalent to U8, is there any 
reason to live upgrade to UFS U8 prior to ZFS U8 or can the more direct 
path be used?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool version

2009-12-04 Thread Enda O'Connor

dick hoogendijk wrote:

OpenSolaris-b128a has zfs version 22 w/ deduplication.
Do I need to update older pools to take advantage of this dedup or can I
just create a new zfs filesystem with this option?



it's pool wide, so a zpool upgrade is necessary, or else create a new pool.
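Roughly ( pool name is a placeholder, and note the upgrade is one-way ):

zpool upgrade            ( shows the versions the pools are at )
zpool upgrade tank       ( brings that pool up to the version the running bits support )
zfs set dedup=on tank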

cheers
Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedup question

2009-11-02 Thread Enda O'Connor
The dedup property is inherited down from the top-level dataset, with
the ability to exclude at a dataset level, or the converse: if it is set
to off on the top-level dataset, lower-level datasets can still set it
to on. That is, one can include and exclude depending on each dataset's
contents.


so largefile will get deduped in the example below.
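To confirm what is in effect, using the names from the example below
( only a sketch; dedupratio is only meaningful once data has been
written ):

zfs get -r dedup tank
zpool get dedupratio tank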

Enda

Breandan Dezendorf wrote:

Does dedup work at the pool level or the filesystem/dataset level?
For example, if I were to do this:

bash-3.2$ mkfile 100m /tmp/largefile
bash-3.2$ zfs set dedup=off tank
bash-3.2$ zfs set dedup=on tank/dir1
bash-3.2$ zfs set dedup=on tank/dir2
bash-3.2$ zfs set dedup=on tank/dir3
bash-3.2$ cp /tmp/largefile /tank/dir1/largefile
bash-3.2$ cp /tmp/largefile /tank/dir2/largefile
bash-3.2$ cp /tmp/largefile /tank/dir3/largefile

Would largefile get dedup'ed?  Would I need to set dedup on for the
pool, and then disable where it isn't wanted/needed?

Also, will we need to move our data around (send/recv or whatever your
preferred method is) to take advantage of dedup?  I was hoping the
blockpointer rewrite code would allow an admin to simply turn on dedup
and let ZFS process the pool, eliminating excess redundancy as it
went.



--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Enda O'Connor

James Lever wrote:


On 03/11/2009, at 7:32 AM, Daniel Streicher wrote:

But how can I update my current OpenSolaris (2009.06) or Solaris 10 
(5/09) to use this.

Or have I wait for a new stable release of Solaris 10 / OpenSolaris?


For OpenSolaris, you change your repository and switch to the 
development branches - should be available to public in about 3-3.5 
weeks time.  Plenty of instructions on how to do this on the net and in 
this list.


For Solaris, you need to wait for the next update release.
At which stage a patch ( the kernel patch ) will be released that can be
applied to pre-update-9 releases to get the latest zpool version;
existing pools would then require a zpool upgrade.


Enda


cheers,
James


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor

Hi
This looks OK to me; the message is not an indicator of an issue.

Could you post
cat /etc/lu/ICF.1
cat /etc/lu/ICF.2 ( the foobar BE )

also lumount foobar /a
and cat /a/etc/vfstab


Enda

Mark Horstman wrote:

I'm seeing the same [b]lucreate[/b] error on my fresh SPARC sol10u8 install 
(and my SPARC sol10u7 machine I keep patches up to date), but I don't have a 
separate /var:

# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
pool00                 3.36G   532G    20K  none
pool00/global          3.51M   532G    20K  none
pool00/global/appl       20K   532G    20K  /appl
pool00/global/home      324K   532G   324K  /home
pool00/global/local      26K   532G    26K  /usr/local
pool00/global/patches  3.13M   532G  3.13M  /usr/local/patches
pool00/shared          3.35G   532G    20K  none
pool00/shared/install  2.52G   532G  2.52G  /install
pool00/shared/local     849M   532G   849M  /opt/local
rpool                  44.6G  89.2G    97K  /rpool
rpool/ROOT             4.63G  89.2G    21K  legacy
rpool/ROOT/sol10u8     4.63G  89.2G  4.63G  /
rpool/dump             8.01G  89.2G  8.01G  -
rpool/swap               32G   121G    16K  -

# lucreate -n foobar
Analyzing system configuration.
Comparing source boot environment sol10u8 file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment foobar.
Source boot environment is sol10u8.
Creating boot environment foobar.
Cloning file systems from boot environment sol10u8 to create boot environment 
foobar.
Creating snapshot for rpool/ROOT/sol10u8 on rpool/ROOT/sol1...@foobar.
Creating clone for rpool/ROOT/sol1...@foobar on rpool/ROOT/foobar.
Setting canmount=noauto for / in zone global on rpool/ROOT/foobar.
WARNING: split filesystem / file system type zfs cannot inherit
mount point options - from parent filesystem / file
type - because the two file systems have different types.
Population of boot environment foobar successful.
Creation of boot environment foobar successful.

# cat /etc/vfstab
#device                    device   mount              FS       fsck  mount    mount
#to mount                  to fsck  point              type     pass  at boot  options
#
fd                         -        /dev/fd            fd       -     no       -
/proc                      -        /proc              proc     -     no       -
/dev/zvol/dsk/rpool/swap   -        -                  swap     -     no       -
/devices                   -        /devices           devfs    -     no       -
sharefs                    -        /etc/dfs/sharetab  sharefs  -     no       -
ctfs                       -        /system/contract   ctfs     -     no       -
objfs                      -        /system/object     objfs    -     no       -
swap                       -        /tmp               tmpfs    -     yes      -

I don't see anything wrong with my /etc/vfstab. Until I get this resolved, I'm 
afraid to patch and use the new BE.


--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor

Hi
This will boot ok in my opinion, not seeing any issues there.

Enda
Mark Horstman wrote:

more input:

# lumount foobar /mnt
/mnt

# cat /mnt/etc/vfstab
# cat /mnt/etc/vfstab
#live-upgrade:Wed Oct 21 09:36:20 CDT 2009 updated boot environment foobar
#device                    device   mount              FS       fsck  mount    mount
#to mount                  to fsck  point              type     pass  at boot  options
#
fd                         -        /dev/fd            fd       -     no       -
/proc                      -        /proc              proc     -     no       -
/dev/zvol/dsk/rpool/swap   -        -                  swap     -     no       -
/devices                   -        /devices           devfs    -     no       -
sharefs                    -        /etc/dfs/sharetab  sharefs  -     no       -
ctfs                       -        /system/contract   ctfs     -     no       -
objfs                      -        /system/object     objfs    -     no       -
swap                       -        /tmp               tmpfs    -     yes      -
rpool/ROOT/foobar          -        /                  zfs      1     no       -


So I'm guessing the '/' entry has to be removed.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor

Mark Horstman wrote:

Then why the warning on the lucreate. It hasn't done that in the past.
This is from the vfstab processing code in ludo.c; in your case it is
not causing any issue, but it shall be fixed.


Enda


Mark

On Oct 21, 2009, at 12:41 PM, Enda O'Connor enda.ocon...@sun.com wrote:


Hi
This will boot ok in my opinion, not seeing any issues there.

Enda
Mark Horstman wrote:

more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
# cat /mnt/etc/vfstab
#live-upgrade:Wed Oct 21 09:36:20 CDT 2009 updated boot environment 
foobar
#device device  mount   FS  fsck
mount   mount
#to mount   to fsck point   typepassat 
boot options

#
fd  -   /dev/fd fd  -   no  -
/proc   -   /proc   proc-   no  -
/dev/zvol/dsk/rpool/swap-   -   swap-   
no  -

/devices-   /devicesdevfs   -   no  -
sharefs -   /etc/dfs/sharetab   sharefs -   no  -
ctfs-   /system/contractctfs-   no  -
objfs   -   /system/object  objfs   -   no  -
swap-   /tmptmpfs   -   yes -
rpool/ROOT/foobar   -   /   zfs 1   no  -
So I'm guessing the '/' entry has to be removed.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-21 Thread Enda O'Connor

Hi
Yes, sorry; remove that line from vfstab in the new BE.

Enda
Mark wrote:
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but 
not in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab?


btw, thank you for responding so quickly to this.

Mark

On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor enda.ocon...@sun.com wrote:


Mark Horstman wrote:

Then why the warning on the lucreate. It hasn't done that in the
past.

this is from the vfstab processing code in ludo.c, in your case not
causing any issue, but shall be fixed.

Enda


Mark

On Oct 21, 2009, at 12:41 PM, Enda O'Connor
enda.ocon...@sun.com wrote:

Hi
This will boot ok in my opinion, not seeing any issues there.

Enda
Mark Horstman wrote:

more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
# cat /mnt/etc/vfstab
#live-upgrade:Wed Oct 21 09:36:20 CDT 2009 updated
boot environment foobar
#device device  mount   FS
 fsckmount   mount
#to mount   to fsck point   type  
 passat boot options

#
fd  -   /dev/fd fd  -   no  -
/proc   -   /proc   proc-   no  -
/dev/zvol/dsk/rpool/swap-   -   swap  
 -   no  -
/devices-   /devicesdevfs   -  
no  -
sharefs -   /etc/dfs/sharetab   sharefs -  
no  -
ctfs-   /system/contractctfs-  
no  -

objfs   -   /system/object  objfs   -   no  -
swap-   /tmptmpfs   -   yes -
rpool/ROOT/foobar   -   /   zfs 1  
no  -

So I'm guessing the '/' entry has to be removed.






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strange problem with liveupgrade on zfs (10u7 and u8)

2009-10-15 Thread Enda O'Connor

Hi
This is 6884728, which is a regression from 6837400.
The workaround is as you have done: remove the lines from vfstab.
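ie something like ( BE name taken from this thread ):

lumount sol10alt /mnt
    edit /mnt/etc/vfstab and delete the rpool/ROOT/sol10alt and
    rpool/ROOT/sol10alt/var lines
luumount sol10alt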

Enda


Brian wrote:
I am have a strange problem with liveupgrade of ZFS boot environment.   I found a similar discussion on the zones-discuss, but, this happens for me on installs with and without zones, so I do not think it is related to zones.  I have been able to reproduce this on both sparc (ldom) and x86 (phsyical).   I was originally trying to luupdate to u8, but, this is easily reproducible with 3 simple steps: lucreate, luactivate, reboot.  

I have a fairly simple install of Solaris 10 u7 with no BE defined.  
Very recent 10 recommended cluster applied.  
121430-42 is present.
Kernel is 141414-10.   
Installed the lu utilities from the Solaris 10 u8 10/09 dvd.  
ZFS root.

/var on a separate dataset.

[b]lucreate -n sol10alt[/b]
Noticed the following warning during lucreate: WARNING: split filesystem / file system type 
zfs cannot inherit mount point options - from parent filesystem / file type - 
because the two file systems have different types.

[b]luactivate sol10alt[/b]

[b]/usr/sbin/shutdown -g0 -i6 -y[/b]

Boot device: /virtual-devi...@100/channel-devi...@200/d...@1:a  File and args:
SunOS Release 5.10 Version Generic_141414-10 64-bit snip
Hostname: SOL10WE001
ERROR: svc:/system/filesystem/minimal:default failed to mount /var  (see 'svcs 
-x' for details)
Oct 14 23:59:48 svc.startd[7]: svc:/system/filesystem/minimal:default: Method 
/lib/svc/method/fs-minimal failed with exit status 95.
Oct 14 23:59:48 svc.startd[7]: system/filesystem/minimal:default failed 
fatally: transitioned to maintenance (see 'svcs -xv' for details)
Requesting System Maintenance Mode
(See /lib/svc/share/README for more information.)
Console login service(s) cannot run

Root password for system maintenance (control-d to bypass):

[b]cat /etc/svc/volatile/system-filesystem-minimal:default.log[/b]
[ Oct 14 16:17:19 Enabled. ]
[ Oct 14 16:17:33 Executing start method (/lib/svc/method/fs-minimal) ]
ERROR: /sbin/mount -O -F zfs   /var failed, err=1
filesystem 'rpool/ROOT/sol10u8BE/var' cannot be mounted using 'mount -F zfs' 
Use 'zfs set mountpoint=/var' instead. If you must use 'mount -F zfs' or 
/etc/vfstab, use 'zfs set mountpoint=legacy'. See zfs(1M) for more information.
[ Oct 14 16:17:33 Method start exited with status 95 ]

This appears to be easily fixed by logging in, removing the last two lines of 
vfstab :
rpool/ROOT/sol10alt -   /   zfs 1   no  -
rpool/ROOT/sol10alt/var -   /varzfs 1   no  -
and rebooting.   The new BE then appears to be fine.I don't know if there are any further ramifications that will appear later, nor why this is happening exactly.  



[b]lucreate output:[/b]
Analyzing system configuration.
Comparing source boot environment root file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment sol10alt.
Source boot environment is root.
Creating boot environment sol10alt.
Cloning file systems from boot environment root to create boot environment 
sol10alt.
Creating snapshot for rpool/ROOT/root on rpool/ROOT/r...@sol10alt.
Creating clone for rpool/ROOT/r...@sol10alt on rpool/ROOT/sol10alt.
Setting canmount=noauto for / in zone global on rpool/ROOT/sol10alt.
Creating snapshot for rpool/ROOT/root/var on rpool/ROOT/root/v...@sol10alt.
Creating clone for rpool/ROOT/root/v...@sol10alt on rpool/ROOT/sol10alt/var.
Setting canmount=noauto for /var in zone global on 
rpool/ROOT/sol10alt/var.
WARNING: split filesystem / file system type zfs cannot inherit
mount point options - from parent filesystem / file
type - because the two file systems have different types.
Population of boot environment sol10alt successful.
Creation of boot environment sol10alt successful.

[b]luactivate sol10alt output[/b]
A Live Upgrade Sync operation will be performed on startup of boot environment 
sol10alt.
** snip
Modifying boot archive service
Activation of boot environment sol10alt successful.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Enda O'Connor

Hi
Yes Solaris 10/09 ( update 8 ) will contain
6501037 want user/group quotas on zfs

it should be out within a few weeks.

So if they have zpools already installed, they can apply
141444-09/141445-09 ( the 10/09 kernel patch ) and, post reboot, run
zpool upgrade to go to zpool version 15 ( the process is not reversible,
by the way ), which contains 6501037. The patches mentioned will be
released shortly after 10/09 itself ships ( within a few days of 10/09
shipping ). If applying patches, make sure to apply the latest rev of
119254/119255 first ( the patch utilities patch ), and read the README
as well for any further instructions.
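Once on pool version 15, the user/group quotas are set per file system;
a quick sketch ( pool, dataset and user names are placeholders ):

zpool upgrade -a
zfs set userquota@alice=10G tank/home
zfs get userquota@alice tank/home
zfs userspace tank/home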


Enda

Tomas Ögren wrote:

On 28 September, 2009 - Jorgen Lundman sent me these 1,7K bytes:


Hello list,

We are unfortunately still experiencing some issues regarding our support 
license with Sun, or rather our Sun Vendor.


We need ZFS User quotas. (That's not the zfs file-system quota) which 
first appeared in svn_114.


We would like to run something like svn_117 (don't really care which 
version per-se, that is just the one version we have done the most 
testing with).


But our Vendor will only support Solaris 10. After weeks of wrangling, 
they have reluctantly agreed to let us run OpenSolaris 2009.06. (Which 
does not have ZFS User quotas).


When I approach Sun-Japan directly I just get told that they don't speak  
English.  When my Japanese colleagues approach Sun-Japan directly, it is  
suggested to us that we stay with our current Vendor.


* Will there be official Solaris 10, or OpenSolaris releases with ZFS 
User quotas? (Will 2010.02 contain ZFS User quotas?)


http://sparcv9.blogspot.com/2009/08/solaris-10-update-8-1009-is-comming.html
which is in no way official, says it'll be in 10u8 which should be
coming within a month.

/Tomas


--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Enda O'Connor

Hi
So ship date is 19th October for Solaris 10 10/09 ( update 8 ).

Enda

Enda O'Connor wrote:

Hi
Yes Solaris 10/09 ( update 8 ) will contain
6501037 want user/group quotas on zfs

it should be out within a few weeks.

So if they have zpools already installed, they can apply
141444-09/141445-09 ( the 10/09 kernel patch ) and, post reboot, run
zpool upgrade to go to zpool version 15 ( the process is not reversible,
by the way ), which contains 6501037. The patches mentioned will be
released shortly after 10/09 itself ships ( within a few days of 10/09
shipping ). If applying patches, make sure to apply the latest rev of
119254/119255 first ( the patch utilities patch ), and read the README
as well for any further instructions.


Enda

Tomas Ögren wrote:

On 28 September, 2009 - Jorgen Lundman sent me these 1,7K bytes:


Hello list,

We are unfortunately still experiencing some issues regarding our 
support license with Sun, or rather our Sun Vendor.


We need ZFS User quotas. (That's not the zfs file-system quota) which 
first appeared in svn_114.


We would like to run something like svn_117 (don't really care which 
version per-se, that is just the one version we have done the most 
testing with).


But our Vendor will only support Solaris 10. After weeks of 
wrangling, they have reluctantly agreed to let us run OpenSolaris 
2009.06. (Which does not have ZFS User quotas).


When I approach Sun-Japan directly I just get told that they don't 
speak  English.  When my Japanese colleagues approach Sun-Japan 
directly, it is  suggested to us that we stay with our current Vendor.


* Will there be official Solaris 10, or OpenSolaris releases with ZFS 
User quotas? (Will 2010.02 contain ZFS User quotas?)


http://sparcv9.blogspot.com/2009/08/solaris-10-update-8-1009-is-comming.html 


which is in no way official, says it'll be in 10u8 which should be
coming within a month.

/Tomas




--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-09-24 Thread Enda O'Connor

Richard Elling wrote:

On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:

I'm surprised no-one else has posted about this - part of the Sun 
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48 
or 96 GB of SLC, a built-in SAS controller and a super-capacitor for 
cache protection. 
http://www.sun.com/storage/disk_systems/sss/f20/specs.xml


At the Exadata-2 announcement, Larry kept saying that it wasn't a disk.  
But there
was little else of a technical nature said, though John did have one to 
show.


RAC doesn't work with ZFS directly, so the details of the configuration 
should prove

interesting.


Isn't Exadata based on Linux? So it's not clear where ZFS comes into
play. But I didn't see any of this Oracle presentation, so I could be
confused by all this.


Enda

 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ARC vs Oracle cache

2009-09-24 Thread Enda O'Connor

Richard Elling wrote:

On Sep 24, 2009, at 10:30 AM, Javier Conde wrote:


Hello,

Given the following configuration:

* Server with 12 SPARCVII CPUs  and 96 GB of RAM
* ZFS used as file system for Oracle data
* Oracle 10.2.0.4 with 1.7TB of data and indexes
* 1800 concurrents users with PeopleSoft Financial
* 2 PeopleSoft transactions per day
* HDS USP1100 with LUNs stripped on 6 parity groups (450xRAID7+1), 
total 48 disks

* 2x 4Gbps FC with MPxIO

Which is the best Oracle SGA size to avoid cache duplication between 
Oracle and ZFS?


Is it better to have a small SGA + big ZFS ARC or large SGA + small 
ZFS ARC?


Who does a better cache for overall performance?


In general, it is better to cache closer to the consumer (application).

You don't mention what version of Solaris or ZFS you are using.
For later versions, the primarycache property allows you to control the
ARC usage on a per-dataset basis.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Hi
Adding oracle-interest.
I would suggest some testing, but the standard recommendations to start
with are: keep the zfs recordsize equal to the db block size, and keep
the Oracle log writer on its own pool ( a 128k recordsize is recommended
for that one, I believe ), as the log writer is an IO-limiting factor.
Use the latest KUs for Solaris as they contain some critical fixes for
zfs/oracle, eg 6775697 for instance.  A small SGA is not usually
recommended, but of course a lot depends on the application layer as
well. I can only say test with the recommendations above and then
deviate from there; perhaps keeping the ZIL on a separate low-latency
device might help ( again, only analysis can determine all that ). Then
remember that even with a large SGA etc, sometimes performance can still
degrade, and you might need to instruct Oracle to actually cache, via
the alter table cache command etc.


Getting familiar with statspack/AWR will be a must here :-) as only an
analysis of Oracle from an Oracle point of view can really tell what is
working as such.
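As a concrete starting point, the above usually translates into
something like ( names are placeholders, and the recordsize must match
the actual db_block_size ):

zfs create -o recordsize=8k datapool/oradata
zfs create -o recordsize=128k logpool/oralog

and, on releases that have it, the per-dataset cache policy can be tuned too:

zfs set primarycache=all datapool/oradata   ( or metadata, if the SGA is doing the data caching )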


Enda


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS LDOMs Jumpstart virtual disk issue

2009-09-14 Thread Enda O'Connor

RB wrote:
I have zfs on my base T5210 box installed with LDOMS (v.1.0.3).  Every time I try to jumpstart my Guest machine, I get the following error. 



ERROR: One or more disks are found, but one of the following problems exists:
- Hardware failure
- The disk(s) available on this system cannot be used to install 
Solaris Software. They do not have a valid label. If you want to use the 
disk(s) for the install, use format(1M) to label the disk and restart the 
installation.
Solaris installation program exited.


If I try to label the disk using format, I get the following error
format> label
Ready to label disk, continue? y

Warning: error writing EFI.
Label failed.

Any help would be appreciated.

Run format -e, then label and select SMI.
This will erase any data on said disks, by the way.

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS flar image.

2009-09-14 Thread Enda O'Connor

RB wrote:

Is it possible to create a flar image of a ZFS root filesystem to install it to
other machines?


Yes, but it needs Solaris 10 update 7 or later to install a zfs flar; see

http://www.opensolaris.org/os/community/zfs/boot/flash/;jsessionid=AB24EEFB6955AD505F19A152CDEC84A8

It isn't supported on OpenSolaris, by the way.

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot error

2009-08-28 Thread Enda O'Connor

Hi
What does boot -L show you?

Enda

On 08/28/09 15:59, cindy.swearin...@sun.com wrote:

Hi Grant,

I've had no more luck researching this, mostly because the error message 
can mean different things in different scenarios.


I did try to reproduce it and I can't.

I noticed you are booting using boot -s, which I think means the system 
will boot from the default boot disk, not the newly added disk.


Can you boot from the secondary boot disk directly by using the boot
path? On my 280r system, I would boot from the secondary disk like this:

ok boot /p...@8,60/SUNW,q...@4/f...@0,0/d...@0,0

Cindy


On 08/27/09 23:54, Grant Lowe wrote:

Hi Cindy,

I tried booting from DVD but nothing showed up.  Thanks for the ideas, 
though.  Maybe your other sources might have something?




- Original Message 
From: Cindy Swearingen cindy.swearin...@sun.com
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject: Re: [zfs-discuss] Boot error

Hi Grant,

I don't have all my usual resources at the moment, but I would boot 
from alternate media and use the format utility to check the 
partitioning on newly added disk, and look for something like 
overlapping partitions. Or, possibly, a mismatch between

the actual root slice and the one you are trying to boot from.

Cindy

- Original Message -
From: Grant Lowe gl...@sbcglobal.net
Date: Thursday, August 27, 2009 5:06 pm
Subject: [zfs-discuss] Boot error
To: zfs-discuss@opensolaris.org


I've got a 240z with Solaris 10 Update 7, all the latest patches from 
Sunsolve.  I've installed a boot drive with ZFS.  I mirrored the 
drive with zpool.  I installed the boot block.  The system had been 
working just fine.  But for some reason, when I try to boot, I get 
the error:


{1} ok boot -s
Boot device: /p...@1c,60/s...@2/d...@0,0  File and args: -s
SunOS Release 5.10 Version Generic_141414-08 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Division by Zero
{1} ok

Any ideas?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root, jumpstart and flash archives

2009-07-08 Thread Enda O'Connor

Hi
for sparc
119534-15
124630-26


for x86
119535-15
124631-27

Higher revs of these will also suffice.

Note these need to be applied to the miniroot of the jumpstart image so
that it can then install a zfs flash archive.
Please read the README notes in these patches for more specific
instructions, including instructions on miniroot patching.
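In outline, patching the install image's miniroot looks like this
( paths are placeholders; on x86 the miniroot may first need to be
unpacked and repacked with root_archive, see the README ):

patchadd -C /export/install/s10/Solaris_10/Tools/Boot /var/tmp/119534-15
patchadd -C /export/install/s10/Solaris_10/Tools/Boot /var/tmp/124630-26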


Enda

Fredrich Maney wrote:

Any idea what the Patch ID was?

fpsm

On Wed, Jul 8, 2009 at 3:43 PM, Bob
Friesenhahnbfrie...@simple.dallas.tx.us wrote:

On Wed, 8 Jul 2009, Jerry K wrote:


It has been a while since this has been discussed, and I am hoping that
you can provide an update, or time estimate.  As we are several months into
Update 7, is there any chance of an Update 7 patch, or are we still waiting
for Update 8.

I saw that a Solaris 10 patch for supporting Flash archives on ZFS came out
about a week ago.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Confused about prerequisites for ZFS to work

2009-02-19 Thread Enda O'Connor

On 02/19/09 13:14, Harry Putnam wrote:

Blake blake.ir...@gmail.com writes:


[...]


I found this entry helpful:

http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode


There is a comment in those directions about installing a SMB PAM
module:
  6. Install the SMB PAM module

  Add the below line to the end of /etc/pam.conf:

  other   password required   pam_smb_passwd.so.1 nowarn

Do you know what that is?

I don't find any pkg named like that here.

   pkg search -r pam|grep smb
   nada


Try pkg search -r pam_smb_passwd.so.1

which gives the SUNWsmbs package.

Enda


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Confused about prerequisites for ZFS to work

2009-02-19 Thread Enda O'Connor

On 02/19/09 13:20, James C. McPherson wrote:

On Thu, 19 Feb 2009 07:14:07 -0600
Harry Putnam rea...@newsguy.com wrote:


Blake blake.ir...@gmail.com writes:


[...]


I found this entry helpful:

http://blogs.sun.com/timthomas/entry/solaris_cifs_in_workgroup_mode

There is a comment in those directions about installing a SMB PAM
module:
  6. Install the SMB PAM module

  Add the below line to the end of /etc/pam.conf:

  other   password required   pam_smb_passwd.so.1 nowarn

Do you know what that is?

I don't find any pkg named like that here.

   pkg search -r pam|grep smb
   nada



You might find it if you searched instead with

$ pkg search -r smb


In SXCE, at least, /usr/lib/security/pam_smb_passwd.so is
part of the SUNWsmbfsu package.


Hi
And looks like 2008.11 has it in SUNWsmbs.

Enda


James
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



--
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] To separate /var or not separate /var, that is the question....

2008-12-11 Thread Enda O'Connor
Vincent Fox wrote:
 Whether tis nobler.
 
 Just wondering if (excepting the existing zones thread) there are any 
 compelling arguments to keep /var as it's own filesystem for your typical 
 Solaris server.  Web servers and the like.
 
 Or arguments against 
with zfs it's easy to set quotas, so a separate /var is not really necessary; in 
the ufs world it was just easier to keep var on a separate disk slice etc, so that 
the root FS would not fill with log files, patch data and/or core dumps etc
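eg something like the following, where the dataset name is only an example and 
assumes /var is its own dataset under the root pool:

   # zfs set quota=8G rpool/ROOT/s10be/var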

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot - U6 kernel patch breaks sparc boot

2008-12-02 Thread Enda O'Connor
Vincent Fox wrote:
 Reviving this thread.
 
 We have a Solaris 10u4 system recently patched with 137137-09.
 Unfortunately the patch was applied from multi-user mode, I wonder if this
 may have been original posters problem as well?  Anyhow we are now stuck
 with an unbootable system as well.
 
 I have submitted a case to Sun about it, will add details as that proceeds.

Hi

There are basically two possible issues that we are aware of.
6772822, where the root fs has insufficient space to hold the failsafe 
archive ( 181M ), the boot archive ( approx 80M ), and a rebuild of the boot 
archive when rebooting, leading to several possible different outcomes.

If you see "seek failed" it indicates that the new bootblk installed ok, but 
the boot archive couldn't be rebuilt on reboot.

There are also issues where, if running svm on mpxio, the bootblk won't 
get installed: 6772083 or 6775167.

Let us know the exact error seen and if possible the exact output from 
patchadd 137137-09
Enda

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Enda O'Connor
Mike DeMarco wrote:
 My root drive is ufs. I have corrupted my zpool which is on a different drive 
 than the root drive.
 My system paniced and now it core dumps when it boots up and hits zfs start. 
 I have a alt root drive that  can boot the system up with but how can I 
 disable zfs from starting on a different drive?
 
 HELP HELP HELP
boot the working alt root drive, mount the other drive to /a
then
mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.corrupt

reboot
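
Spelled out a bit more ( the device name for the other drive's ufs root slice 
is only an example ):

   # mount /dev/dsk/c0t0d0s0 /a
   # mv /a/etc/zfs/zpool.cache /a/etc/zfs/zpool.cache.corrupt
   # umount /a
   # reboot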

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrading from a single disk.

2008-11-19 Thread Enda O'Connor
[EMAIL PROTECTED] wrote:
 Suppose I have a single ZFS pool on a single disk;
 I want to upgrade the system to use two different, larger disks
 and I want to mirror.
 
 Can I do something like:
 
   - I start with disk #0
   - add mirror on disk #1
 (resilver)
   - replace first disk (#0) with disk #2
 (resilver)
 
 Casper
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
make sure to install the bootblk on disk#2 before removing disk#0; 
zpool doesn't do this for you when you add a second disk to the pool.
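
For reference, the pool side of the sequence is roughly as follows ( device 
names are only examples; let each resilver finish, zpool status will show when 
it is done ):

   # zpool attach pool c0t0d0s0 c0t1d0s0
   # zpool replace pool c0t0d0s0 c0t2d0s0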

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Upgrading from a single disk.

2008-11-19 Thread Enda O'Connor
Enda O'Connor wrote:
 [EMAIL PROTECTED] wrote:
 Suppose I have a single ZFS pool on a single disk;
 I want to upgrade the system to use two different, larger disks
 and I want to mirror.

 Can I do something like:

  - I start with disk #0
  - add mirror on disk #1
(resilver)
  - replace first disk (#0) with disk #2
(resilver)

 Casper
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 make sure to install the bootblk on disk#2  before removing disk#0, 
 zpool doesn't do this if you add a second disk to the system.
 
 Enda
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi
just to be clear

/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/

It works fine for me once you do the above step, ie if you add a disk to a 
root pool and then remove the original boot disk.

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Enda O'Connor
Hi
try and get the stack trace from the core
ie mdb core.vold.24978
::status
$C
$r

also run the same 3 mdb commands on the cpio core dump.
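
If it's easier, the three mdb commands can also be fed in non-interactively, 
for example:

   # printf '::status\n$C\n$r\n' | mdb core.cpio.6208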

also if you could extract some data from the truss log, ie a few hundred 
lines before the first SIGBUS


Enda

On 11/06/08 01:25, Krzys wrote:
 THis is so bizare, I am unable to pass this problem. I though I had not 
 enough space on my hard drive (new one) so I replaced it with 72gb 
 drive, but still getting that bus error. Originally when I restarted my 
 server it did not want to boot, do I had to power it off and then back 
 on and it then booted up. But constantly I am getting this Bus Error - 
 core dumped
 
 anyway in my /var/crash I see hundreds of core.void files and 3 
 core.cpio files. I would imagine core.cpio are the ones that are direct 
 result of what I am probably eperiencing.
 
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi
 Looks ok, some mounts left over from previous fail.
 In regards to swap and dump on zpool you can set them
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance, of course above are only an example of how to do it.
 or make the zvol for rootpool/dump etc before lucreate, in which case 
 it will take the swap and dump size you have preset.

 But I think we need to see the coredump/truss at this point to get an 
 idea of where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 2%/export/home
 rootpool33G19K21G 1%/rootpool
 rootpool/ROOT   33G18K21G 1%/rootpool/ROOT
 rootpool/ROOT/zfsBE 33G31M21G 1%/.alt.tmp.b-UUb.mnt
 /export/home78G   1.2G76G 2% 
 /.alt.tmp.b-UUb.mnt/export/home
 /rootpool   21G19K21G 1% 
 /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT  21G18K21G 1% 
 /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap   8.3G 0K   8.3G 0% 
 /.alt.tmp.b-UUb.mnt/var/run
 swap   8.3G 0K   8.3G 0%
 /.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 


 so I have /, /usr, /var and /export/home on that primary disk. 
 Original disk is 140gb, this new one is only 36gb, but disk 
 utilization on that primary disk is much less utilized so easily 
 should fit on it.

 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like total available disk space on my disk should be 33.92GB
 so its quite close as both numbers do approach. So to make sure I 
 will change disk for 72gb and will try again. I do not beleive that I 
 need to match my main disk size as 146gb as I am not using that much 
 disk space on it. But let me try this and it might be why I am 
 getting this problem...



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, ie is usr separate 
 from / and so on ( maybe a df -k ). Don't appear to have any zones 
 installed, just to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-06 Thread Enda O'Connor
Hi
Weird, almost like some kind of memory corruption.

Could I see the upgrade logs, that got you to u6
ie
/var/sadm/system/logs/upgrade_log
for the u6 env.
What kind of upgrade did you do, liveupgrade, text based etc?

Enda

On 11/06/08 15:41, Krzys wrote:
 Seems like core.vold.* are not being created until I try to boot from zfsBE, 
 just creating zfsBE gets onlu core.cpio created.
 
 
 
 [10:29:48] @adas: /var/crash  mdb core.cpio.5545
 Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]
 ::status
 debugging core file of cpio (32-bit) from adas
 file: /usr/bin/cpio
 initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt
 threading model: multi-threaded
 status: process terminated by SIGBUS (Bus Error)
 $C
 ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0)
 ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8)
 ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98)
 ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1)
 ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870)
 ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400)
 ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0)
 $r
 %g0 = 0x %l0 = 0x
 %g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28
 %g2 = 0x00037fe0 %l2 = 0x2e2f2e2f
 %g3 = 0x8000 %l3 = 0x03c8
 %g4 = 0x %l4 = 0x2e2f2e2f
 %g5 = 0x %l5 = 0x
 %g6 = 0x %l6 = 0xdc00
 %g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree
 %o0 = 0x %i0 = 0x0030
 %o1 = 0x %i1 = 0x
 %o2 = 0x000e70c4 %i2 = 0x00039c28
 %o3 = 0x %i3 = 0x00ff
 %o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f
 %o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x
 %o6 = 0xffbfe5b0 %i6 = 0xffbfe610
 %o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394
 libc.so.1`malloc+0x4c
 
   %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc
 ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2
 %y = 0x
%pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164
   %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128
%sp = 0xffbfe5b0
%fp = 0xffbfe610
 
   %wim = 0x
   %tbr = 0x
 
 
 
 
 
 
 
 On Thu, 6 Nov 2008, Enda O'Connor wrote:
 
 Hi
 try and get the stack trace from the core
 ie mdb core.vold.24978
 ::status
 $C
 $r

 also run the same 3 mdb commands on the cpio core dump.

 also if you could extract some data from the truss log, ie a few hundred 
 lines before the first SIGBUS


 Enda

 On 11/06/08 01:25, Krzys wrote:
 THis is so bizare, I am unable to pass this problem. I though I had not 
 enough space on my hard drive (new one) so I replaced it with 72gb drive, 
 but still getting that bus error. Originally when I restarted my server it 
 did not want to boot, do I had to power it off and then back on and it then 
 booted up. But constantly I am getting this Bus Error - core dumped

 anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio 
 files. I would imagine core.cpio are the ones that are direct result of 
 what I am probably eperiencing.

 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 Looks ok, some mounts left over from previous fail.
 In regards to swap and dump on zpool you can set them
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance, of course above are only an example of how to do it.
 or make the zvol for rootpool/dump etc before lucreate, in which case it 
 will take the swap and dump size you have preset.

 But I think we need to see the coredump/truss at this point to get an idea 
 of where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
:277)
 Nov  5 02:44:28 adas root:  = [EMAIL PROTECTED] =null 
 at com.sun.patchpro.util.State.run(State.java:266)
 Nov  5 02:44:28 adasat java.lang.Thread.run(Thread.java:595)
 
 
 [07:36:43] @adas: /root  lustatus
 Boot Environment   Is   Active ActiveCanCopy
 Name   Complete NowOn Reboot Delete Status
 --  -- - -- --
 ufsBE  yes  yesyes   no -
 zfsBE  yes  no noyes-
 [07:36:52] @adas: /root  luactivate zfsBE
 A Live Upgrade Sync operation will be performed on startup of boot 
 environment 
 zfsBE.
 
 
 **
 
 The target boot environment has been activated. It will be used when you
 reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
 MUST USE either the init or the shutdown command when you reboot. If you
 do not use either init or shutdown, the system will not boot using the
 target BE.
 
 **
 
 In case of a failure while booting to the target BE, the following process
 needs to be followed to fallback to the currently working boot environment:
 
 1. Enter the PROM monitor (ok prompt).
 
 2. Change the boot device back to the original boot environment by typing:
 
   setenv boot-device /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a
 
 3. Boot to the original boot environment by typing:
 
   boot
 
 **
 
 Activation of boot environment zfsBE successful.
 [07:37:52] @adas: /root  init 0
 [07:38:44] @adas: /root  stopping NetWorker daemons:
   nsr_shutdown -q
 svc.startd: The system is coming down.  Please wait.
 svc.startd: 89 system services are now being stopped.
 Nov  5 07:39:39 adas syslogd: going down on signal 15
 svc.startd: The system is down.
 syncing file systems... done
 Program terminated
 {0} ok boot
 
 SC Alert: Host System has Reset
 Probing system devices
 Probing memory
 Probing I/O buses
 
 Sun Fire V210, No Keyboard
 Copyright 2007 Sun Microsystems, Inc.  All rights reserved.
 OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
 Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.
 
 
 
 Rebooting with command: boot
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args:
 
 Can't open boot_archive
 
 Evaluating:
 The file just loaded does not appear to be executable.
 {1} ok boot disk2
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
 File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot disk1
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
 File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok setenv boot-device /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a
 boot-device =   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a
 {1} ok boot
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok boot disk
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0  
 File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok setenv boot-device /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a
 boot-device =   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a
 {1} ok boot
 Boot device: /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL 
 PROTECTED],0:a  File and args:
 ERROR: /[EMAIL PROTECTED],60: Last Trap: Fast Data Access MMU Miss
 
 {1} ok
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi Krzys
Also some info on the actual system
ie what was it upgraded to u6 from and how.
and an idea of how the filesystems are laid out, ie is usr separate from 
/ and so on ( maybe a df -k ). Don't appear to have any zones installed, 
just to confirm.
Enda

On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like
 
 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled
 
 then all should be good, and cores should appear in /var/crash
 
 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process
 
 
 coreadm -u to load the new settings without rebooting.
 
 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.
 
 then rerun test and check /var/crash for core dump.
 
 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
 ufsBE -n zfsBE -p rootpool
 
 might give an indication, look for SIGBUS in the truss log
 
 NOTE, that you might want to reset the coreadm and ulimit for coredumps 
 after this, in order to not risk filling the system with coredumps in 
 the case of some utility coredumping in a loop say.
 
 
 Enda
 On 11/05/08 13:46, Krzys wrote:

 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to 
 get my system moved from ufs to zfs and now I am unable to boot 
 it... can anyone suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada 
 build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.

 Anyway I did restart the whole process again, and I got again that Bus 
 Error

 [07:59:01] [EMAIL PROTECTED]: /root  zpool create rootpool c1t1d0s0
 [07:59:22] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool/ROOT
 cannot open 'rootpool/ROOT': dataset does not exist
 [07:59:27] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool
 [07:59:31] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi
did you get a core dump?
would be nice to see the core file to get an idea of what dumped core,
might configure coreadm if not already done
run coreadm first, if the output looks like

# coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled

then all should be good, and cores should appear in /var/crash

otherwise the following should configure coreadm:
coreadm -g /var/crash/core.%f.%p
coreadm -G all
coreadm -e global
coreadm -e per-process


coreadm -u to load the new settings without rebooting.

also might need to set the size of the core dump via
ulimit -c unlimited
check ulimit -a first.

then rerun test and check /var/crash for core dump.

If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
ufsBE -n zfsBE -p rootpool

might give an indication, look for SIGBUS in the truss log

NOTE, that you might want to reset the coreadm and ulimit for coredumps 
after this, in order to not risk filling the system with coredumps in 
the case of some utility coredumping in a loop say.


Enda
On 11/05/08 13:46, Krzys wrote:
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get my 
 system moved from ufs to zfs and now I am unable to boot it... can anyone 
 suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.
 
 Anyway I did restart the whole process again, and I got again that Bus Error
 
 [07:59:01] [EMAIL PROTECTED]: /root  zpool create rootpool c1t1d0s0
 [07:59:22] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool/ROOT
 cannot open 'rootpool/ROOT': dataset does not exist
 [07:59:27] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool
 [07:59:31] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
 cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list
zfs-discuss

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi
No, that should be fine; as long as the disk is SMI labelled then you're 
ok, and LU would have failed much earlier if it found an EFI labelled 
disk.

core dump is not due to this, something else is causing that.
Enda

On 11/05/08 15:14, Krzys wrote:
 Great, I will follow this, but I was wondering maybe I did not setup my 
 disc correctly? from what I do understand zpool cannot be setup on whole 
 disk as other pools are so I did partition my disk so all the space is 
 in s0 slice. Maybe that's not correct?
 
 [10:03:45] [EMAIL PROTECTED]: /root  format
 Searching for disks...done
 
 
 AVAILABLE DISK SELECTIONS:
0. c1t0d0 SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
1. c1t1d0 SUN36G cyl 24620 alt 2 hd 27 sec 107
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
 Specify disk (enter its number): 1
 selecting c1t1d0
 [disk formatted]
 /dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see 
 zpool(1M).
 /dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see 
 zpool(1M).
 
 
 FORMAT MENU:
 disk   - select a disk
 type   - select (define) a disk type
 partition  - select (define) a partition table
 current- describe the current disk
 format - format and analyze the disk
 repair - repair a defective sector
 label  - write label to the disk
 analyze- surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 save   - save new disk/partition definitions
 inquiry- show vendor, product and revision
 volname- set 8-character volume name
 !cmd - execute cmd, then return
 quit
 format verify
 
 Primary label contents:
 
 Volume name = 
 ascii name  = SUN36G cyl 24620 alt 2 hd 27 sec 107
 pcyl= 24622
 ncyl= 24620
 acyl=2
 nhead   =   27
 nsect   =  107
 Part  TagFlag Cylinders SizeBlocks
   0   rootwm   0 - 24619   33.92GB(24620/0/0) 71127180
   1 unassignedwu   00 (0/0/0)0
   2 backupwm   0 - 24619   33.92GB(24620/0/0) 71127180
   3 unassignedwu   00 (0/0/0)0
   4 unassignedwu   00 (0/0/0)0
   5 unassignedwu   00 (0/0/0)0
   6 unassignedwu   00 (0/0/0)0
   7 unassignedwu   00 (0/0/0)0
 
 format
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
 global core file pattern: /var/crash/core.%f.%p
 global core file content: default
   init core file pattern: core
   init core file content: default
global core dumps: enabled
   per-process core dumps: enabled
  global setid core dumps: enabled
 per-process setid core dumps: disabled
 global core dump logging: enabled

 then all should be good, and cores should appear in /var/crash

 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process


 coreadm -u to load the new settings without rebooting.

 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.

 then rerun test and check /var/crash for core dump.

 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
 ufsBE -n zfsBE -p rootpool

 might give an indication, look for SIGBUS in the truss log

 NOTE, that you might want to reset the coreadm and ulimit for 
 coredumps after this, in order to not risk filling the system with 
 coredumps in the case of some utility coredumping in a loop say.


 Enda
 On 11/05/08 13:46, Krzys wrote:

 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to 
 get my system moved from ufs to zfs and now I am unable to boot 
 it... can anyone suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining 
 which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi
Looks ok, some mounts left over from previous fail.
In regards to swap and dump on zpool you can set them
zfs set volsize=1G rootpool/dump
zfs set volsize=1G rootpool/swap

for instance, of course above are only an example of how to do it.
or make the zvol for rootpool/dump etc before lucreate, in which case it 
will take the swap and dump size you have preset.
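
ie pre-creating them would look something like this ( sizes are only an 
example ):

   # zfs create -V 2G rootpool/dump
   # zfs create -V 2G rootpool/swap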

But I think we need to see the coredump/truss at this point to get an 
idea of where things went wrong.
Enda

On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 2%/export/home
 rootpool33G19K21G 1%/rootpool
 rootpool/ROOT   33G18K21G 1%/rootpool/ROOT
 rootpool/ROOT/zfsBE 33G31M21G 1%/.alt.tmp.b-UUb.mnt
 /export/home78G   1.2G76G 2% 
 /.alt.tmp.b-UUb.mnt/export/home
 /rootpool   21G19K21G 1%
 /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT  21G18K21G 1% 
 /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap   8.3G 0K   8.3G 0%
 /.alt.tmp.b-UUb.mnt/var/run
 swap   8.3G 0K   8.3G 0%/.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 
 
 
 so I have /, /usr, /var and /export/home on that primary disk. Original 
 disk is 140gb, this new one is only 36gb, but disk utilization on that 
 primary disk is much less utilized so easily should fit on it.
 
 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like total available disk space on my disk should be 33.92GB
 so its quite close as both numbers do approach. So to make sure I will 
 change disk for 72gb and will try again. I do not beleive that I need to 
 match my main disk size as 146gb as I am not using that much disk space 
 on it. But let me try this and it might be why I am getting this problem...
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, ie is usr separate 
 from / and so on ( maybe a df -k ). Don't appear to have any zones 
 installed, just to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled

 then all should be good, and cores should appear in /var/crash

 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process


 coreadm -u to load the new settings without rebooting.

 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.

 then rerun test and check /var/crash for core dump.

 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate 
 -c ufsBE -n zfsBE -p rootpool

 might give an indication, look for SIGBUS in the truss log

 NOTE, that you might want to reset the coreadm and ulimit for 
 coredumps after this, in order to not risk filling the system with 
 coredumps in the case of some utility coredumping in a loop say.


 Enda
 On 11/05/08 13:46, Krzys wrote:

 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps 
 to get my system moved from ufs to zfs and not I am unable to boot 
 it... can anyone suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining 
 which
 file systems should

Re: [zfs-discuss] Scripting zfs send / receive

2008-09-26 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Hi
Clive King has a nice blog entry showing this in action
http://blogs.sun.com/clive/entry/replication_using_zfs

with associated script at:
http://blogs.sun.com/clive/resource/zfs_repl.ksh

Which I think answers most of your questions.
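
For reference, the core of what that script wraps is roughly the following 
( dataset, snapshot and host names are made up; in bash, set -o pipefail so a 
failure on either side of the pipe shows up in the exit status ):

   #!/bin/bash
   set -o pipefail
   # tank/data, the snapshot names, remotehost and the mail address are placeholders
   zfs snapshot tank/data@2008-09-26-1200
   zfs send -i tank/data@2008-09-26-1145 tank/data@2008-09-26-1200 | \
       ssh backup@remotehost /usr/sbin/zfs receive tank/data
   if [ $? -ne 0 ]; then
       echo "send/receive failed" | /bin/mail -s "zfs replication error" admin@example.com
   fi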

Enda
Ross wrote:
 Hey folks,
 
 Is anybody able to help a Solaris scripting newbie with this? I want to put 
 together an automatic script to take snapshots on one system and send them 
 across to another. I've shown the manual process works, but only have a very 
 basic idea about how I'm going to automate this.
 
 My current thinking is that I want to put together a cron job that will work 
 along these lines:
 
 - Run every 15 mins
 - take a new snapshot of the pool
 - send the snapshot to the remote system with zfs send / receive and ssh.
 (am I right in thinking I can get ssh to work with no password if I create a 
 public/private key pair? http://www.go2linux.org/ssh-login-using-no-password)
 - send an e-mail alert if zfs send / receive fails for any reason (with the 
 text of the failure message)
 - send an e-mail alert if zfs send / receive takes longer than 15 minutes and 
 clashes with the next attempt
 - delete the oldest snapshot on both systems if the send / receive worked
 
 Can anybody think of any potential problems I may have missed? 
 
 Bearing in mind I've next to no experience in bash scripting, how does the 
 following look?
 
 **
 #!/bin/bash
 
 # Prepare variables for e-mail alerts
 SUBJECT=zfs send / receive error
 EMAIL=[EMAIL PROTECTED]
 
 NEWSNAP=build filesystem + snapshot name here
 RESULTS=$(/usr/sbin/zfs snapshot $NEWSNAP)
 # how do I check for a snapshot failure here?  Just look for non blank 
 $RESULTS?
 if $RESULTS; then
# send e-mail
/bin/mail -s $SUBJECT $EMAIL $RESULTS
exit
 fi
 
 PREVIOUSSNAP=build filesystem + snapshot name here
 RESULTS=$(/usr/sbin/zfs send -i $NEWSNAP $PREVIOUSSNAP | ssh -l *user* 
 *remote-system* /usr/sbin/zfs receive *filesystem*)
 # again, how do I check for error messages here?  Do I just look for a blank 
 $RESULTS to indicate success?
 if $RESULTS ok; then
OBSOLETESNAP=build filesystem + name here
zfs destroy $OBSOLETESNAP
ssh -l *user* *remote-system* /usr/sbin/zfs destroy $OBSOLETESNAP
 else 
# send e-mail with error message
/bin/mail -s $SUBJECT $EMAIL $RESULTS
 fi
 **
 
 One concern I have is what happens if the send / receive takes longer than 15 
 minutes. Do I need to check that manually, or will the script cope with this 
 already? Can anybody confirm that it will behave as I am hoping in that the 
 script will take the next snapshot, but the send / receive will fail and 
 generate an e-mail alert?
 
 thanks,
 
 Ross
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-09-04 Thread Enda O'Connor
Steve Goldberg wrote:
 Hi Lori,
 
 is ZFS boot still planned for S10 update 6?
 
 Thanks,
 
 Steve
 --
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi
yes, its' in u6, I have migrated u5 ufs on svm  to zfs boot
Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-03 Thread Enda O'Connor
Mark J. Musante wrote:
 
 On 3 Sep 2008, at 05:20, F. Wessels [EMAIL PROTECTED] wrote:
 
 Hi,

 can anybody describe the correct procedure to replace a disk (in a  
 working OK state) with a another disk without degrading my pool?
 
 This command ought to do the trick:
 
 zfs replace pool old-disk new-disk
Slight typo above: it's zpool replace that is the command

By the way what is the pool config, I assume you have a pool that 
supports this :-)

Once the disk is added, a resilver will occur, so do not take snapshots 
till it has finished, as the resilver would be restarted; this is fixed 
in snv_94 though.
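
ie something along the lines of ( pool and device names are only an example ):

   # zpool replace tank c1t2d0 c1t5d0
   # zpool status tank

and wait for zpool status to report the resilver as completed before taking 
snapshots again.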

Enda
 
 The type of pool doesn't matter.
 
 
 Regards,
 markm
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] marvell88sx patch

2008-08-14 Thread Enda O'Connor
Hi
Build 93 contains all the fixes in 138053-02, it would appear.

Just to avoid confusion, patch 138053-02 is only relevant to the solaris 
10 updates, and does not apply to the opensolaris variants.

To get all the fixes for opensolaris, upgrade or install build 93.

If on solaris 10, then suggest installing 138053-02, which requires 
127127-11, the update 5 kernel patch. ( install latest patch utils patch 
first though, 119255-xx )
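
ie roughly in this order, checking each patch README first ( the 119255 
revision is whatever is current at the time ):

   # patchadd 119255-<rev>
   # patchadd 127127-11
   # patchadd 138053-02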

Enda

Martin Gasthuber wrote:
 Hi,
 
   in which opensolaris (nevada) version this fix is included 
 
 thanks,
Martin
 
 On 13 Aug, 2008, at 18:52, Bob Friesenhahn wrote:
 
 I see that a driver patch has now been released for marvell88sx
 hardware.  I expect that this is the patch that Thumper owners have
 been anxiously waiting for.  The patch ID is 138053-02.

 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], 
 http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S10u6, zfs and zones

2008-08-05 Thread Enda O'Connor ( Sun Micro Systems Ireland)
dick hoogendijk wrote:
 My server runs S10u5. All slices are UFS. I run a couple of sparse
 zones on a seperate slice mounted on /zones.
 
 When S10u6 comes out booting of ZFS will become possible. That is great
 news. However, will it be possible to have those zones I run now too?
you can migrate pre u5 ufs to u6 zfs via lucreate, zones included.

There are no support issues for zones on a system with zfs root that I'm aware 
of, and LU 
( Live Upgrade ) in u6 will support upgrading zones on zfs.
 I always understood ZFS and root zones are difficult. I hope to be able
 to change all FS to ZFS, including the space for the sparse zones.
zones can be on zfs or any other supported config in combination with zfs root.

Is there a specific question you had in mind with regard to sparse zones and 
zfs root? Not 
too clear if I answered your actual query.

Enda
 
 Does somebody have more information on this?
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot mirror

2008-08-02 Thread Enda O'Connor
Malachi de Ælfweald wrote:
 I just tried that, but the installgrub keeps failing:
 
 [EMAIL PROTECTED]:~# zpool status
   pool: rpool
  state: ONLINE
  scrub: resilver completed after 0h1m with 0 errors on Sat Aug  2 
 01:44:55 2008
 config:
 
 NAME  STATE READ WRITE CKSUM
 rpool ONLINE   0 0 0
   mirror  ONLINE   0 0 0
 c5t0d0s0  ONLINE   0 0 0
 c5t1d0s0  ONLINE   0 0 0
 
 errors: No known data errors
 [EMAIL PROTECTED]:~# installgrub /boot/grub/stage1 /boot/grub/stage2 
 /dev/dsk/c5t1d0s0
 cannot open/stat device /dev/dsk/c5t1d0s2
  
that should be /dev/rdsk/c5t1d0s2

Enda
 
 On Wed, May 21, 2008 at 3:19 PM, Lori Alt [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:
 
 
 It is also necessary to use either installboot (sparc) or
 installgrub (x86)
 to install the boot loader on the attached disk.  It is a bug that this
 is not done automatically (6668666 - zpool command should put a
 bootblock on a disk added as a mirror of a root pool vdev)
 
 Lori
 
 [EMAIL PROTECTED] wrote:
   Hi Tom,
  
   You need to use the zpool attach command, like this:
  
   # zpool attach pool-name disk1 disk2
  
   Cindy
  
   Tom Buskey wrote:
  
   I've always done a disksuite mirror of the boot disk.  It's been
 easry to do after the install in Solaris.  WIth Linux I had do do it
 during the install.
  
   OpenSolaris 2008.05 didn't give me an option.
  
   How do I add my 2nd drive to the boot zpool to make it a mirror?
  
  
   This message posted from opensolaris.org http://opensolaris.org
   ___
   zfs-discuss mailing list
   zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
   ___
   zfs-discuss mailing list
   zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org mailto:zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I trust ZFS?

2008-08-01 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Dave wrote:
 
 
 Enda O'Connor wrote:

 As for thumpers, once 138053-02 (  marvell88sx driver patch ) releases 
 within the next two weeks ( assuming no issues found ), then the 
 thumper platform running s10 updates will be up to date in terms of 
 marvel88sx driver fixes, which fixes some pretty important issues for 
 thumper.
 Strongly suggest applying this patch to thumpers going forward.
 u6 will have the fixes by default.

 
 I'm assuming the fixes listed in these patches are already committed in 
 OpenSolaris (b94 or greater)?
 
 -- 
 Dave
yep.
I know this is the opensolaris list, but a lot of folks asking questions do seem to 
be running 
various update releases.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I trust ZFS?

2008-07-31 Thread Enda O'Connor
Ross wrote:
 Hey folks,
 
 I guess this is an odd question to be asking here, but I could do with some 
 feedback from anybody who's actually using ZFS in anger.
 
 I'm about to go live with ZFS in our company on a new fileserver, but I have 
 some real concerns about whether I can really trust ZFS to keep my data alive 
 if things go wrong.  This is a big step for us, we're a 100% windows company 
 and I'm really going out on a limb by pushing Solaris.
 
 The problems with zpool status hanging concern me, knowing that I can't hot 
 plug drives is an issue, and the long resilver times bug is also a potential 
 problem.  I suspect I can work around the hot plug drive bug with a big 
 warning label on the server, but knowing the pool can hang so easily makes me 
 worry about how well ZFS will handle other faults.
 
 On my drive home tonight I was wondering whether I'm going to have to swallow 
 my pride and order a hardware raid controller for this server, letting that 
 deal with the drive issues, and just using ZFS as a very basic filesystem.
 
 What has me re-considering ZFS though is that on the other hand I know the 
 Thumpers have sold well for Sun, and they pretty much have to use ZFS.  So 
 there's a big installed base out there using it, and that base has been using 
 it for a few years.  I know from the Thumper manual that you have to 
 unconfigure drives before removal on them on those servers, which goes a long 
 way towards making me think that should be a relatively safe way to work. 
 
 The question is whether I can make a server I can be confident in.  I'm now 
 planning a very basic OpenSolaris server just using ZFS as a NFS server, is 
 there anybody out there who can re-assure me that such a server can work well 
 and handle real life drive failures?
 
 thanks,
 
 Ross
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi
What kind of hardware etc is the fileserver going to be running, and 
what zpool layout is being planned.

As for thumpers, once 138053-02 (  marvell88sx driver patch ) releases 
within the next two weeks ( assuming no issues found ), then the thumper 
platform running s10 updates will be up to date in terms of marvell88sx 
driver fixes, which fixes some pretty important issues for thumper.
Strongly suggest applying this patch to thumpers going forward.
u6 will have the fixes by default.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-25 Thread Enda O'Connor
Alan Burlison wrote:
 Lori Alt wrote:
 
 It's hard to know what the right thing to do is from within
 the installation software.  Does the user want to preserve
 as much of their current environment as possible?  Or does
 the user want to move toward the new standard configuration
 (which is pretty much zfs-everything)?  Or something in between?
 
 It's all a bit academic now anyway, as LU has for some reason decided to 
 stop installing entries in menu.lst, no matter what I do.  No error 
 messages, no warnings, just doesn't work.  Bizzare - this did work at 
 one point.  I've tried blitzing and reinstalling LU entirely - still no joy.
 
probably
6722767 lucreate did not add new BE to menu.lst ( or grub )
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mike Gerdts wrote:
 On Wed, Jul 23, 2008 at 11:36 AM,  [EMAIL PROTECTED] wrote:
 Rainer,

 Sorry for your trouble.

 I'm updating the installboot example in the ZFS Admin Guide with the
 -F zfs syntax now. We'll fix the installboot man page as well.
 
 Perhaps it also deserves a mention in the FAQ somewhere near
 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.
 
 5. How do I attach a mirror to an existing ZFS root pool?
 
 Attach the second disk to form a mirror.  In this example, c1t1d0s0 is 
 attached.
 
 # zpool attach rpool c1t0d0s0 c1t1d0s0
 
 Prior to build TBD, bug 6668666 causes the following
 platform-dependent steps to also be needed:
 
 On sparc systems:
 # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

should be uname -m above I think.
and path to be:
# installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk as path for sparc.

others might correct me though

 
 On x86 systems:
 # ...
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cannot attach mirror to SPARC zfs root pool

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Enda O'Connor ( Sun Micro Systems Ireland) wrote:
 Mike Gerdts wrote:
 On Wed, Jul 23, 2008 at 11:36 AM,  [EMAIL PROTECTED] wrote:
 Rainer,

 Sorry for your trouble.

 I'm updating the installboot example in the ZFS Admin Guide with the
 -F zfs syntax now. We'll fix the installboot man page as well.

 Perhaps it also deserves a mention in the FAQ somewhere near
 http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/#mirrorboot.

 5. How do I attach a mirror to an existing ZFS root pool?

 Attach the second disk to form a mirror.  In this example, c1t1d0s0 is 
 attached.

 # zpool attach rpool c1t0d0s0 c1t1d0s0

 Prior to build TBD, bug 6668666 causes the following
 platform-dependent steps to also be needed:

 On sparc systems:
 # installboot -F zfs /usr/`uname -i`/lib/fs/zfs/bootblk 
 /dev/rdsk/c1t1d0s0
 
 should be uname -m above I think.
 and path to be:
 # installboot -F zfs /platform/`uname -m`/lib/fs/zfs/bootblk as path for 
 sparc.
 
 others might correct me though
 

 On x86 systems:
 # ...
meant to add that on x86 the following should do the trick ( again I'm open to 
correction )

installgrub /boot/grub/stage1 /zfsroot/boot/grub/stage2 /dev/rdsk/c1t0d0s0

haven't tested the x86 one though.

Enda

 
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS boot - upgrade from UFS swap slices

2008-07-24 Thread Enda O'Connor ( Sun Micro Systems Ireland)
[EMAIL PROTECTED] wrote:
 Alan,
 
 Just make sure you use dumpadm to point to valid dump device and
 this setup should work fine. Please let us know if it doesn't.
 
 The ZFS strategy behind automatically creating separate swap and
 dump devices including the following:
 
 o Eliminates the need to create separate slices
 o Enables underlying ZFS architecture for swap and dump devices
 o Enables you to set characteristics like compression on swap
 and dump devices, and eventually, encryption
Hi
also makes resizing easy to do as well.
ie
zfs set volsize=8G lupool/dump
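
and for the swap zvol it needs to be taken out of use first, something like 
( pool name as above, the swap volume name is only an example ):

   # swap -d /dev/zvol/dsk/lupool/swap
   # zfs set volsize=8G lupool/swap
   # swap -a /dev/zvol/dsk/lupool/swap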


Enda
 
 Cindy
 
 Alan Burlison wrote:
 [EMAIL PROTECTED] wrote:

 ZFS doesn't swap to a slice in build 92. In this build, a ZFS root
 environment requires separate ZFS volumes for swap and dump devices.

 The ZFS boot/install project and information trail starts here:

 http://opensolaris.org/os/community/zfs/boot/

 Is this going to be supported in a later build?

 I got it to use the existing swap slice by manually reconfiguring the 
 ZFS-root BE post-install to use the swap slice as swap  dump - the 
 resulting BE seems to work just fine, so I'm not sure why LU insists on 
 creating ZFS swap  dump.

 Basically I want to migrate my root filesystem from UFS to ZFS and leave 
 everything else as it it, there doesn't seem to be a way to do this.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status of ZFS on Solaris 10

2008-07-07 Thread Enda O'Connor
Hi
S10_u5 has version 4, latest in opensolaris is version 10

see

http://opensolaris.org/os/community/zfs/version/10/

where n is the version number:

http://opensolaris.org/os/community/zfs/version/n/

so substitute 4 for n to see the version 4 changes, and so on up to 10.

run zpool upgrade ( with no arguments it doesn't actually upgrade anything ) 
to see the version number.
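
For example:

   # zpool upgrade       shows the version the system supports and any pools on an older version
   # zpool upgrade -v    lists the versions supported by this release and what each one added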

BTW s10_u6 will have version 10, as will the kernel patch that releases 
with u6 that applies to u5.

Enda
Chris Cosby wrote:
 We're running Solaris 10 U5 on lots of Sun SPARC hardware. That's ZFS 
 version=4. Simple question: how far behind is this version of ZFS as 
 compared to what is in Nevada? Just point me to the web page, I know 
 it's out there somewhere.
 
 -- 
 chris -at- microcozm -dot- net
 === Si Hoc Legere Scis Nimium Eruditionis Habes
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is it possible to add a mirror device later?

2008-07-02 Thread Enda O'Connor
Hi Tommaso
Have a look at the man page for zpool, and the attach section in 
particular; it will do the job nicely.
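For the single-disk rpool quoted below, that would be something along the 
lines of the following ( the second disk name is made up, and this is 
untested here ):

# zpool attach rpool c5t0d0s0 c4t0d0s0
# zpool status rpool    ( watch the resilver complete )

Since it is the root pool, you would also want to put a boot block on the new 
disk with installboot afterwards, as discussed elsewhere on this list.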

Enda



Tommaso Boccali wrote:
 Ciao, 
 the root filesystem of my thumper is a ZFS pool with a single disk:
 
 bash-3.2# zpool status rpool
   pool: rpool
  state: ONLINE
  scrub: none requested
 config:
 
 NAMESTATE READ WRITE CKSUM
 rpool   ONLINE   0 0 0
   c5t0d0s0  ONLINE   0 0 0
 spares
   c0t7d0AVAIL   
   c1t6d0AVAIL   
   c1t7d0AVAIL   
 
 
 is it possible to add a mirror to it? I seem to be able only to add a 
 new PAIR of disks in mirror, but not to add a mirror to the existing 
 disk ...
 
 thanks
 
 tommaso
 
 
 Tommaso Boccali - CMS Experiment - INFN Pisa
 iChat/AIM/Skype/Gizmo:  tomboc73
 Mail: mailto:[EMAIL PROTECTED]
 Pisa: +390502214216  Portable: +393472563154
 CERN: +41227671545   Portable:  +41762310208
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirroring zfs slice

2008-06-17 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Hi
Use zpool attach
from
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m

zpool attach [-f] pool device new_device

 Attaches new_device to an existing zpool device. The existing device cannot 
be part of a raidz configuration. If device is not currently part of a 
mirrored configuration, device automatically transforms into a two-way 
mirror of device and new_device. If device is part of a two-way mirror, 
attaching new_device creates a three-way mirror, and so on. In either case, 
new_device begins to resilver immediately.
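So for the pool in your mail below, something like this should do it 
( untested here, slice names taken from your own output ):

# zpool attach export c2t0d0s5 c2t2d0s5
# zpool status export    ( the two slices should now show up as a mirror, resilvering )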


Enda
Srinivas Chadalavada wrote:
 Hi All,
 
 I had a slice with a zfs file system which I want to mirror. I 
 followed the procedure mentioned in the admin guide but I am getting this 
 error. Can you tell me what I did wrong?
 
  
 
 root # zpool list
 
 NAME     SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
 
 export   254G   230K    254G    0%  ONLINE  -
 
 root # echo |format
 
 Searching for disks...done
 
  
 
  
 
 AVAILABLE DISK SELECTIONS:
 
   0. c2t0d0 <DEFAULT cyl 35497 alt 2 hd 255 sec 63>
 
   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
 PROTECTED]/[EMAIL PROTECTED],0
 
   1. c2t2d0 <DEFAULT cyl 35497 alt 2 hd 255 sec 63>
 
   /[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci1000,[EMAIL 
 PROTECTED]/[EMAIL PROTECTED],0
 
 Specify disk (enter its number): Specify disk (enter its number):
 
 :root # zpool create export mirror c2t0d0s5 c2t2d0s5
 
 invalid vdev specification
 
 use '-f' to override the following errors:
 
 /dev/dsk/c2t0d0s5 is part of active ZFS pool export. Please see zpool(1M).
 
  
 
 Thanks,
 
 Srini
 
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] /var/sadm on zfs?

2008-06-01 Thread Enda O'Connor
Jim Litchfield at Sun wrote:
 I think you'll find that any attempt to make zones (certainly whole root
 ones) will fail after this.
   

right, zoneadm install actually copies the global zone's undo.z into 
the local zone, so that patchrm of an existing patch will work.

haven't tried out what happens when the undo is missing,

but zoneadm install actually copies the undo from
/var/sadm/pkg/SUNWcsr/save/pspool/SUNWcsr/save/patch-id/undo.z
above example for just SUNWcsr.

BTW the undo under pspool is identical to the one in 
/var/sadm/pkg/SUNWcsr/save/patch-id/undo.z ( obvious waste of space 
really )

so one solution based on Mike's would be to create a symlink in the 
pspool save/patch-id  for each undo.z being moved.

Note I have not tested any of this out so beware :-)
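Roughly, one reading of that suggestion, for the SUNWcsr example and the 
hypothetical patch id 123456-01 used in Mike's mail ( completely untested, 
treat it as a sketch only ):

# cd /var/sadm/pkg/SUNWcsr/save/pspool/SUNWcsr/save/123456-01
# rm undo.Z
# ln -s /var/sadm/pkg/SUNWcsr/save/123456-01/undo.Z undo.Z

so that only one physical copy of each undo.Z remains, while zoneadm install 
still finds something at the pspool path.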

Enda
 Jim
 ---
 Mike Gerdts wrote:
   
 On Sat, May 31, 2008 at 5:16 PM, Bob Friesenhahn
 [EMAIL PROTECTED] wrote:
   
 
 On my heavily-patched Solaris 10U4 system, the size of /var (on UFS)
 has gotten way out of hand due to the remarkably large growth of
 /var/sadm.  Can this directory tree be safely moved to a zfs
 filesystem?  How much of /var can be moved to a zfs filesystem without
 causing boot or runtime issues?
 
   
 /var/sadm is not used during boot.

 If you have been patching regularly, you probably have a bunch of
 undo.Z files that are used only in the event that you want to back
 out.  If you don't think you will be backing out any patches that were
 installed 90 or more days ago the following commands may be helpful:

 To understand how much space would be freed up by whacking the old undo 
 files:

 # find /var/sadm/pkg -mtime +90 -name undo.Z | xargs du -k \
 | nawk '{t+= $1; print $0} END {printf("Total: %d MB\n", t / 1024)}'

 Copy the old backout files somewhere else:

 # cd /var/sadm
 # find pkg -mtime +90 -name undo.Z \
  | cpio -pdv /somewhere/else

 Remove the old (90+ days) undo files

 # find /var/sadm/pkg -mtime +90 -name undo.Z | xargs rm -f

 Oops, I needed those files to back out 123456-01

 # cd /somewhere/else
 # find pkg -name undo.Z | grep 123456-01 \
  | cpio -pdv /var/sadm
 # patchrm 123456-01

 Before you do this, test it and convince yourself that it works.  I
 have not seen Sun documentation (either docs.sun.com or
 sunsolve.sun.com) that says that this is a good idea - but I haven't
 seen any better method for getting rid of the cruft that builds up in
 /var/sadm either.

 I suspect that further discussion on this topic would be best directed
 to [EMAIL PROTECTED] or sun-managers mailing list (see
 http://www.sunmanagers.org/).

   
 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Version Correct

2008-05-20 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Kenny wrote:
 Back to the top
 
 Is there a patch upgrade for ZFS on Solaris 10?  Where might I find it.

it's the kernel patch; depending on how far back you are in the updates you 
might have to install multiple kernel patches.

the latest one is 127127-11/127128-11 ( the u5 KU )
it depends on 120011-14/120012-14 ( the u4 kernel )
which depends on 118833-36/118855-36 the U3 kernel
Above showing sparc/x86 versions

You can get them from sunsolve.sun.com
http://sunsolve.sun.com/show.do?target=patchpage

Not sure about entitlement though, you will have to register at minimum ( no 
a/c needed as 
far as I know ), but you might need an a/c for certain patches.

Also make sure you have the latest patch utilities patch applied as well 
( 119254/119255 :- SPARC/x86 ). Also run patchadd -a <KU patch-id>; the -a 
does a dry run and doesn't update the system. Examine the output, and then 
drop the -a if all looks ok.
The recommended cluster ( under downloads on the patch page ) has all the 
latest patches and requirements; it might be easier to grab and work with 
that.
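In other words, for the dry run mentioned above ( run from wherever the patch 
was downloaded and unpacked; the revision shown is the u5 KU named earlier ):

# unzip 127127-11.zip
# patchadd -a 127127-11    ( dry run only, nothing is changed on the system )
# patchadd 127127-11       ( the real thing, once the dry run output is clean )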

Enda
 
 TIA   --Kenny
  
  
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs snapshot -r hangs

2008-04-16 Thread Enda O'Connor ( Sun Micro Systems Ireland)

Sam Nicholson wrote:

Greetings,

snv_79a
AMD 64x2 in 64 bit kernel mode.

I'm in the middle of migrating a large zfs set from a pair of 1TB mirrors 
to a 1.3TB RAIDz.


I decided to use zfs send | zfs receive, so the first order of business 
was to snap the entire source filesystem.  


# zfs snapshot -r [EMAIL PROTECTED]

What happened was expected, the source drives flashed and wiggled :)
What happened next was not, the destination drives (or maybe the boot 
drive, as they share one disk activity light) began flashing and wiggling, 
and have been doing so for 12 hours how.


iostat shows no activity to speak of, and no transfers at all on any of the 
disks.  ditto for zpool iostat.


all zfs commands hang, and the lack of output from truss'ing the pids 
indicate they are stuck in the kernel.  Heck, I can't even reboot, as that

hangs.

So what I was wondering whether there exists a dtrace recipe or some 
such that I can use to figure out where this is hung in the kernel.


Cheers!
-sam
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Hi
echo "::walk thread|::findstack!munges" | mdb -k > sometestfile.txt

where munges is the script I have attached ( courtesy of David Powell I believe ), ie 
place munges somewhere on your path, and run the above.


This text file might be large ( most likely will be, but the munges bit will trim it down 
sufficiently ), so examine it and see if there is any zfs-related stuff in there.
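eg a quick first pass could simply be:

# grep -i zfs sometestfile.txt

before eyeballing the matching stacks in context.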


That might be sufficient to get an idea of where zfs is stuck; otherwise we might need 
the entire text file.

Assuming that this actually works ( seen as reboot is apparently even stuck )

Enda
#!/bin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License, Version 1.0 only
# (the License).  You may not use this file except in compliance
# with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets [] replaced with your own identifying
# information: Portions Copyright [] [name of copyright owner]
#
# CDDL HEADER END
#
#
# Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#

#
# Stack munging utility, written by David Powell.
#
# Takes the output of multiple ::findstack dcmds and groups similar 
# stacks together, presenting the most common ones first.  To use:
#
#  ::walk thread | ::findstack ! munges
#

foo=d
bar=
while getopts ab i; do
case $i in
b)  foo="s/\[\(.*\) ]/\1/";;
a)  bar="s/+[^(]*//";;
esac
done

sed "
/^P/ d
/(..*)$/ d
s/^s.*read \(.*:\).*$/\1/
t a
/^\[/ $foo
s/^ .* \(.*\)$/ \1/
$bar
H
$ !d
s/.*//
:a
x
1 d
s/\n//g
" | sort -t : -k 2 | uniq -c -f 1 | sort -rn  | sed '
s/) /)\
/g
s/^ *\([^ ]*\) *\(.*\): */\1##  tp: \2\
/
1 !s/^/\
/
'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10 x86 + ZFS / NFS server cp problem with AIX

2008-03-18 Thread Enda O'Connor
Michael Schuster wrote:
 Sachin Palav wrote:
   
 Friends,
 I have recently built a file server on x2200 with solaris x86 having zfs 
 (version 4) and running NFS version 2 & samba.

 the AIX 5.2  AIX 5.2 client give error while running command cp -R 
 zfs_nfs_mount_source zfs_nfs_mount_destination as below:
 cp: 0653-440 directory/1: name too long.
 cp: 0653-438 cannot read directory directory/1.
  and the cp core dumps in AIX.
 

 I think someone from the AIX camp is probably better suited to answering 
 this, as they hopefully understand under which circumstances AIX's cp 
 would spit out this kind of error message.

 HTH
 Michael
   
Hi
seems like CR
6538383 cp -r to AIX local dir from NFS-mounted ZFS dir complains cp: 
0653-440 dir : name too long.

seems adding -p will get past this one, or using find as in:

find . | cpio -pdmu
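( note that cpio -p wants a destination directory, so in full that would look 
something like the following, run from the NFS-mounted zfs directory; the 
paths are only examples ):

# cd /net/nfsserver/export/share
# find . | cpio -pdmu /local/destination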

else copied in from CR
{
There is an old problem with scandir() in libc that was fixed by the 
following APARs:
AIX 5.2: IY59427
AIX 5.3: IY60062

I can't be certain this is the same problem, but it looks highly likely. 
There is no APAR for AIX 5.1 as it is out of support. If my fix levels 
are correct, if you have installed 5.2.0.50 (or later) or 5.3.0.10 (or 
later) or any level of 6.1 then you'll have the fix.
}

Enda

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs create' hanging

2008-03-10 Thread Enda O'Connor
Paul Raines wrote:
 On Sun, 9 Mar 2008, Marc Bevand wrote:

   
 Paul Raines raines at nmr.mgh.harvard.edu writes:
 
 Mar  9 03:22:16 raidsrv03 sata: NOTICE:
 /pci at 0,0/pci1022,7458 at 1/pci11ab,11ab at 1:
 Mar  9 03:22:16 raidsrv03  port 6: device reset
 [...]

 The above repeated a few times but now seems to have stopped.
 Running 'hd -c' shows all disks as ok.  But it seems like I do have
 a disk problem.  But since everything is redundant (zraid) why a
 failed disk should lock up the machine like I saw I don't understand
 unless there is a some bigger issue.
   
 It looks like your Solaris 10U4 install on a Thumper is affected by:
 http://bugs.opensolaris.org/view_bug.do?bug_id=6587133
 Which was discussed here:
 http://opensolaris.org/jive/thread.jspa?messageID=189256
 http://opensolaris.org/jive/thread.jspa?messageID=163460

 Apply T-PATCH 127871-02, or upgrade to snv_73, or wait for 10U5.
 

 I don't find 127871-02 on the normal Patches and Updates website.
 Does someone have to go some place special for that?  Also, where
 do I find info on updating to snv_73?

 thanks

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
Hi
Unfortunately 127871-* are currently feature patches used in update 5 
builds; these patches won't be released until u5 ships, so that won't be 
for another month perhaps. It is very dangerous to apply these to pre-u5 
systems until they are shipped.
There are no sustaining patches for this issue 6579855 ( the only CR 
fixed in 127871-02 )

but the CR 6587133 mentioned above is fixed in generic patch 125205-07, 
available on SunSolve

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommended Patches for ZFS

2008-02-29 Thread Enda O'Connor
David Jackson wrote:
 I'm looking for an authoritative list of the patches that should be  
 applied for ZFS for the commercial version of Solaris.  A  
 centralized URL that is maintained would be ideal.  Can someone  
 reply back to me with one as I'm not a subscriber to the news list.


 David Jackson
 [EMAIL PROTECTED]


 

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
Hi
The latest patch is 127728-06, which requires 120011-14 ( the U4 kernel 
patch )
x86 would be 127729 and Ku 120012-14.

I assume that your system already has zfs, or are you looking for the 
list of patches to get zfs going in u1 or earlier


Enda


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Patch 127729-07 not NFS patch!

2008-02-29 Thread Enda O'Connor
Bob Friesenhahn wrote:
 The Sun Update Manager on my x86 Solaris 10 box describes this new 
 patch as SunOS 5.10_x86 nfs fs patch (note use of nfs) but looking 
 at the problem descriptions this is quite clearly a big ZFS patch that 
 Solaris 10 users should pay attention to since it fixes a bunch of 
 nasty bugs.

 Maybe someone can fix this fat-fingered patch description in Sun 
 Update Manager?


 Bob
 ==
 Bob Friesenhahn
 [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
Hi Bob
Looking into getting this changed; I actually spotted this earlier on 
today myself.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] tricking install tools with quota and reservation

2008-02-03 Thread Enda O'Connor
Christine Tran wrote:
 Hi,

 I understand the upgrade issue surrounding the patching and upgrade 
 tools. Can I get around this with some trickery using quota and 
 reservation?  I would quota and reserve for a pool/somezonepath some 
 capacity, say 10GB, and in this way allocate a fixed capacity per zonepath.

 Will this work, or will the patching & upgrade tools not even run if they 
 detect that zones are on zfs?

 CT
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
   
upgrade of zones on zfs is just not supported yet.
patching does work though for zones on zfs:
add 119254-50/119255-50 ( SPARC/x86 ) and patching will work.
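eg to check what revision you already have:

# showrev -p | grep 119254

( patchadd -p | grep 119254 gives the same information )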

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3510 Array and ZFS/Zones

2007-12-21 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Richard Elling wrote:
 Morris Hooten wrote:
 I looked through the solarsinternals zfs best practices and not
 completly sure
 of the best scenario.
   
 
 ok, perhaps we should add some clarifications...
 
 I have a Solaris 10 6/06 Generic_125100-10 box with attached 3510 array
 and would like to use zfs on it. Should I create multiple logical disks
 thru the raid
 controller then create zfs raid file systems across the LD's?

   
 
 That method will work ok.  Many people do this with various RAID
 arrays.  We can't answer the question is it the best way? because we
 would need more detailed information on what you are trying to
 accomplish and how you want to make design trade-offs.  So for now,
 I would say it works just like you would expect.
 
 Can I also migrate zones that are on a ufs file system now into a newly
 created zfs file system
 although knowing the limitations with zones and zfs in 06/06?
   
 
 Zone limitations with ZFS should be well documented in the admin
 guides.  Currently, the install and patch process is not ZFS aware, which
 might cause you some difficulty with upgrading or patching.  There are
 alternative methods to solve this problem, but you should be aware of the
 current limitation.

the patch to fix the patch of zones on zfs is pending.
119254/119255 revision 49, we hope to release this in the coming days ( 
maybe by COB today even )
 
 Recommendations?
   
 
 Use Solaris 10 9/07.  It has more than a year's worth of improvements
 and enhancements to Solaris.
I think you mean 8/07, ( update 4 ) release?
But yes this release is most advised,
Enda
  -- richard
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] 3510 Array and ZFS/Zones

2007-12-21 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mangan wrote:
 Is this a release that can be downloaded from the website and will work on 
 SPARC systems. The write up says it is for VMware. Am I missing something?
 
 
 Use Solaris 10 9/07.  It has more than a year's worth of improvements
 and enhancements to Solaris.
 -- richard
 
 ___
 zones-discuss mailing list
 [EMAIL PROTECTED]
Hi
Haven't been following this thread so I might be off topic ..

I think this should be 8/07 ( Solaris 10 update 4 )
If so then it's on the download site ( or should be ) and works for 
SPARC/x86 ( same as any Solaris 10 release )

What writeup are you looking at?


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [zones-discuss] 3510 Array and ZFS/Zones

2007-12-21 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Mangan wrote:
 The 9/07 release appears to be for X86 only. The 8/07 release appears to be 
 for Sparc or X86. The 9/07 release is also titled  Express Developers 
 Edition 9/07.
 
 Apparently not a release I can use.
 
 Thanks for the quick feedback.
ok my mistake, getting confused by release numbers; 9/07 was what 
Richard meant.

Enda

 When is the next release for Sparc due out?
 
 Paul
 
 
 -Original Message-
 From: Enda O'Connor ( Sun Micro Systems Ireland) [EMAIL PROTECTED]
 Sent: Dec 21, 2007 9:15 AM
 To: Richard Elling [EMAIL PROTECTED]
 Cc: zfs-discuss@opensolaris.org, [EMAIL PROTECTED], [EMAIL PROTECTED]
 Subject: Re: [zones-discuss] [zfs-discuss] 3510 Array and ZFS/Zones

 Richard Elling wrote:
 Morris Hooten wrote:
 I looked through the solarsinternals zfs best practices and not
 completly sure
 of the best scenario.
   
 ok, perhaps we should add some clarifications...

 I have a Solaris 10 6/06 Generic_125100-10 box with attached 3510 array
 and would like to use zfs on it. Should I create multiple logical disks
 thru the raid
 controller then create zfs raid file systems across the LD's?

   
 That method will work ok.  Many people do this with various RAID
 arrays.  We can't answer the question is it the best way? because we
 would need more detailed information on what you are trying to
 accomplish and how you want to make design trade-offs.  So for now,
 I would say it works just like you would expect.

 Can I also migrate zones that are on a ufs file system now into a newly
 created zfs file system
 although knowing the limitations with zones and zfs in 06/06?
   
 Zone limitations with ZFS should be well documented in the admin
 guides.  Currently, the install and patch process is not ZFS aware, which
 might cause you some difficulty with upgrading or patching.  There are
 alternative methods to solve this problem, but you should be aware of the
 current limitation.
 the patch to fix the patch of zones on zfs is pending.
 119254/119255 revision 49, we hope to release this in the coming days ( 
 maybe by COB today even )
 Recommendations?
   
 Use Solaris 10 9/07.  It has more than a year's worth of improvements
 and enhancements to Solaris.
 I think you mean 8/07, ( update 4 ) release?
 But yes this release is most advised,
 Enda
  -- richard

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 ___
 zones-discuss mailing list
 [EMAIL PROTECTED]
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Does Oracle support ZFS as a file system with Oracle RAC?

2007-12-18 Thread Enda O'Connor ( Sun Micro Systems Ireland)
David Runyon wrote:
 Does anyone know this?
 
 David Runyon
 Disk Sales Specialist
 
 Sun Microsystems, Inc.
 4040 Palm Drive
 Santa Clara, CA 95054 US
 Mobile 925 323-1211
 Email [EMAIL PROTECTED]
 
 
 
 
 Russ Lai wrote:
 Dave;
 Does ZFS support Oracle RAC?
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
metalink doc 403202.1 appears to support this config, but to me it reads a 
little unclear.


{
Applies to:
Oracle Server - Enterprise Edition - Version: 9.2.0.5 to 10.2.0.3
Solaris Operating System (SPARC 64-bit)
Goal
Is the Zeta File System (ZFS) of Solaris 10 certified/supported by 
ORACLE for:
- Database
- RAC

Solution
Oracle certifies and support the RDBMS on the whole OS for non-RAC 
installations. However if there is an exception, this should appear on 
the Release Notes, or in the OS Oracle specific documentation manual.

As you are not specific to cluster file systems for RAC installations, 
usually there is no problem on install Oracle on the file systems 
provided by OS vendor.But if any underlying OS error is found then it 
should be handled by the OS vendor.

Over the past few years Oracle has worked with all the leading system 
and storage vendors to validate their specialized storage products, 
under the Oracle Storage Compatibility Program (OSCP), to ensure these 
products were compatible for use with the Oracle database. Under the 
OSCP, Oracle and its partners worked together to validate specialized 
storage technology including NFS file servers, remote mirroring, and 
snapshot products.

At this time Oracle believes that these three specialized storage 
technologies are well understood by the customers, are very mature, and 
the Oracle technology requirements are well know. As of January, 2007, 
Oracle will no longer validate these products.

On a related note, many Oracle customers have embraced the concept of 
the resilient low-cost storage grid defined by Oracle's Resilient 
Low-Cost Storage Initiative (leveraging the Oracle Database 10g 
Automatic Storage Management (ASM) feature to make low-cost, modular 
storage arrays resilient), and many storage vendors continue to 
introduce new, low-cost, modular arrays for an Oracle storage grid 
environment. As of January, 2007, the Resilient Low-Cost Storage 
Initiative is discontinued.

For more information on the same please refer to Oracle Storage Program 
Change Notice

}
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] safe zfs-level snapshots with a UFS-on-ZVOL filesystem?

2007-10-08 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Dick Davies wrote:
 I had some trouble installing a zone on ZFS with S10u4
 (bug in the postgres packages) that went away when I  used a
 ZVOL-backed UFS filesystem
 for the zonepath.
 
Hi
Out of interest what was the bug.

Enda
 I thought I'd push on with the experiment (in the hope Live Upgrade
 would be able to upgrade such a zone).
 It's a bit unwieldy, but everything worked reasonably well -
 performance isn't much worse than straight ZFS (it gets much faster
 with compression enabled, but that's another story).
 
 The only fly in the ointment is that ZVOL level snapshots don't
 capture unsynced data up at the FS level. There's a workaround at:
 
   http://blogs.sun.com/pgdh/entry/taking_ufs_new_places_safely
 
 but I wondered if there was anything else that could be done to avoid
 having to take such measures?
 I don't want to stop writes to get a snap, and I'd really like to avoid UFS
 snapshots if at all possible.
 
 I tried mounting forcedirectio in the (mistaken) belief that this
 would bypass the UFS
 buffer cache, but it didn't help.
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool versioning

2007-09-13 Thread Enda O'Connor ( Sun Micro Systems Ireland)
Paul Armor wrote:
 Hi,
 I was wondering if anyone would know if this is just an accounting-type 
 error with the recorded version= stored on disk, or if there 
 are/could-be any deeper issues with an upgraded zpool?
 
 I created a pool under a Sol10_x86_u3 install (11/06?), and zdb correctly 
 reported the pool as a version=3 pool.  I reinstalled the OS with a u4 
 (08/07), ran zpool upgrade, was told I successfully upgraded from version 3 
 to version 4, but zdb reported version=3.  I unmounted the zfs, 
 remounted, and zdb still reported version=3.  I reran zpool upgrade, and 
 was told there were no pools to upgrade.
 
 I blew away that pool, and created a new pool and zdb correctly reported 
 version=4.
 
 Perhaps I'm being pedantic, but the version thing on an upgraded pool 
 bugged me ;-)
 
 Does anyone have any thoughts/experiences on other surprises that may be 
 lying in wait on an upgraded zpool?
 
 Thanks,
 Paul
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hi Paul
is it not zpool upgrade -a,
but I could be wrong

I seem to remember zpool upgrade does not actually upgrade unless you 
specify the -a.
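ie something like:

# zpool upgrade       ( only reports the on-disk versions )
# zpool upgrade -a    ( upgrades every pool on the system )
# zpool upgrade mypool    ( or upgrade a single pool by name; pool name illustrative )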

Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 120473-05

2007-04-12 Thread Enda O'Connor ( Sun Micro Systems Ireland)

Robert Milkowski wrote:

Hello Enda,

Wednesday, April 11, 2007, 4:21:35 PM, you wrote:

EOCSMSI Robert Milkowski wrote:

Hello zfs-discuss,

  In order to get IDR126199-01 I need to install 120473-05 first.
  I can get 120473-07 but everything more than -05 is marked as
  incompatible with IDR126199-01 so I do not want to force it.

  Local Sun's support has problems with getting 120473-05 also so I'm
  stuck for now and I would really like to get that IDR running.

  Can someone help?



EOCSMSI Hi
EOCSMSI This patch will be on SunSolve possibly later today, tomorrow at latest
EOCSMSI I suspect, as it has only just been pushed out from testing.
EOCSMSI I have sent the patch in another mail for now.

Thank you patch - it worked (installed) along with IDR properly.

However it seems like the problem is not solved by IDR :(


Hi Robert
So this IDR has two bugs as fixed
6458218 assertion failed: ss == NULL
6495013 Loops and recursion in metaslab_ff_alloc can kill performance, 
even on a pool with lots of free data


I have added the IDR's requestors so they can comment on which one of the 
above fixes was not solved via this IDR in your testing.



Enda
___
zfs-discuss mailing list
[EMAIL PROTECTED]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 120473-05

2007-04-11 Thread Enda O'Connor ( Sun Micro Systems Ireland)

Robert Milkowski wrote:

Hello zfs-discuss,

  In order to get IDR126199-01 I need to install 120473-05 first.
  I can get 120473-07 but everything more than -05 is marked as
  incompatible with IDR126199-01 so I do not want to force it.

  Local Sun's support has problems with getting 120473-05 also so I'm
  stuck for now and I would really like to get that IDR running.

  Can someone help?



Hi
This patch will be on SunSolve possibly later today, tomorrow at latest 
I suspect, as it has only just been pushed out from testing.

I have sent the patch in another mail for now.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: update on zfs boot support

2007-03-12 Thread Enda O'Connor

Brian Hechinger wrote:

On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
  

On March 11, 2007 6:05:13 PM + Tim Foster [EMAIL PROTECTED] wrote:


* ability to add disks to mirror the root filesystem at any time,
  should they become available
  

Can't this be done with UFS+SVM as well?  A reboot would be required
but you have to do regular reboots anyway just for patching.



It can, but you have to plan ahead.  You need to leave a small partition for
the SVM metadata.  Something I *never* remember to do (I'm too used to working
with Veritas).

If you can remember to plan ahead, then yes.  ;)

-brian
  
not necessarily, metainit -a -f will force all onto one disk, but should 
only be used in emergency cases really, where you are already in the 
situation of not having a partition to put the meta DB on.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 118855-36 ZFS

2007-02-05 Thread Enda O'Connor

Robert Milkowski wrote:

Hello Robert,

Monday, February 5, 2007, 2:26:57 PM, you wrote:

RM Hello zfs-discuss,

RM   I've patched U2 system to 118855-36. Several zfs related bugs id
RM   should be covered between -19 and -36 like HotSpare support.

RM   However despite -36 is installed 'zpool upgrade' still claims only
RM   v1 and v2 support. Also there's no zfs promote, etc.

RM   /kernel/drv/zfs is dated May 18 with 482448 in size which looks too
RM   old.

RM   Also 118855-36 has many zfs related bugs listed however in a section
RM   file I do not see zfs,zpool commands or zfs kernel modules.
RM   Looks like they are not delivered.



RM   ?


Looks like 124205-04 is needed.
While I can see it on SunSolve smpatch doesn't show it.

Also many ZFS bugs listed in 124205-04 are also listed in 118855-36 while
it looks like only 124205-04 is actually covering them and provides
necessary binaries.

Something is messed up with -36.


?

  
The KU looks ok to me; basically bugs in core functionality such as zfs 
can and do end up in more than one patch, ie the fix might affect 
genunix/libc in the KU and the zfs utilities in the zfs patch. So the 
bug will be listed in the KU and the ZFS patch.


Enda
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 118855-36 ZFS

2007-02-05 Thread Enda O'Connor

Robert Milkowski wrote:

Hello Casper,

Monday, February 5, 2007, 2:32:49 PM, you wrote:

  

Hello zfs-discuss,

 I've patched U2 system to 118855-36. Several zfs related bugs id
 should be covered between -19 and -36 like HotSpare support.

 However despite -36 is installed 'zpool upgrade' still claims only
 v1 and v2 support. Also there's no zfs promote, etc.

 /kernel/drv/zfs is dated May 18 with 482448 in size which looks too
 old.

 Also 118855-36 has many zfs related bugs listed however in a section
 file I do not see zfs,zpool commands or zfs kernel modules.
 Looks like they are not delivered.
  



CDSC Have you also installed the companion patch 124205-04?  It contains all
CDSC the ZFS bits.

I've just figured it out.

However why those bug ids related in ZFS are listed in -36 while
actually those fixes are delivered in 124205-05 (the same bug ids)?
  
because the fix is spread across both the KU and the zfs patch, I suspect. Say the 
fix affects zpool and libc; then both the KU and the zfs patch will have the bug 
listed.

Also why 'smpatch analyze' doesn't show 124205? (I can force it to
download the patch if I specify it).


  
Not too sure about smpatch, but I suspect that the file ( current.zip ) 
that smpatch uses to determine if a patch is applicable has not been 
updated with the zfs patch yet. Patches will appear on SunSolve before 
they are listed by smpatch, as the build of current.zip is not done 
daily I believe.


Just a guess, as 124205-04 was released on the 31st

Enda

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 118855-36 ZFS

2007-02-05 Thread Enda O'Connor

Hi
118855-36 is marked interactive and is not installable by automation, or 
at least should not be installed by smpatch.



If you look in the
patchpro.download.directory
from smpatch get

under the dir cache ( if I remember correctly )

you will see a current.zip ( possibly with a time stamp as part of the 
name )


see if 124205 is in this file, I will check with the people responsible 
for current.zip in the mean time.
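A crude way to check, assuming current.zip is an ordinary zip archive 
( untested, and the cache path depends on your patchpro.download.directory 
setting, /var/sadm/spool being the usual default ):

# cd /var/sadm/spool/cache
# unzip -p current*.zip | grep -c 124205

where a count of zero would suggest the patch is not yet in the cross-reference.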

I suspect that a current.zip including this patch has not been released yet.



Enda


David W. Smith wrote:
I'm pretty sure I have a service plan, but smpatch is not returning me 
the 124205 patch.  I'm currently running Solaris 10, update 2.


Also, has anyone had problems installing 118855-36 with smpatch?  I had
issues, and ended up having to install it with patchadd.

David

On Mon, 2007-02-05 at 08:59 -0800, Joe Little wrote:
...

  

Ah.. it looks like this patch is non-public (need a service plan). So
the free as in beer version ZFS U3 bits likely won't make it until U4
into the general release.



Also why 'smpatch analyze' doesn't show 124205? (I can force it to
download the patch if I specify it).

  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-04 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer




Hi
I guess the problem is that David is using smpatch (our automated patching
system )
So in theory he is up to date on his patches
( he has since removed 122660-02, 122658-02 and 122640-05 )

So when I install the following onto a system ( SPARC S10 FCS ) with two
zones already running:
119254-25 ( patchutilties patch )
119578-26
118822-30
118833-18
122650-02
122640-05
And reboot, I too have the same issue, there is no /dev/zfs in my local zones?

# zonename
global
# 
# cat /etc/release
 Solaris 10 1/06 s10s_u1wos_19a SPARC
 
# ls /var/sadm/patch
118822-30 119254-26 120900-04 122640-05
118833-18 119578-26 121133-02 122650-02
# uptime
 5:48pm up 2 min(s), 1 user, load average: 0.58, 0.29, 0.11
# ls /export/zones/sparse-1/dev/zfs
/export/zones/sparse-1/dev/zfs: No such file or directory
# zlogin sparse-1 ls /dev/zfs
/dev/zfs: No such file or directory
# 
I rebooted the zone and then the system, touching /reconfigure, all to no
avail.
I then added the rest of the patches you suggested and rebooted my zones,
and I had /dev/zfs, strange.

But David had all the patches added and still did not get /dev/zfs in the
non global zones
Enda


George Wilson wrote:
Apologies for
the internal URL, I'm including the list of patches for  the everyone's benefit: 
  
 
 
sparc Patches 
 * ZFS Patches 
 o 118833-17 SunOS 5.10: kernel patch 
 o 118925-02 SunOS 5.10: unistd header file patch 
 o 119578-20 SunOS 5.10: FMA Patch 
 o 119982-05 SunOS 5.10: ufsboot patch 
 o 120986-04 SunOS 5.10: mkfs and newfs patch 
 o 122172-06 SunOS 5.10: swap swapadd isaexec patch 
 o 122174-03 SunOS 5.10: dumpadm patch 
 o 122637-01 SunOS 5.10: zonename patch 
 o 122640-05 SunOS 5.10: zfs genesis patch 
 o 122644-01 SunOS 5.10: zfs header file patch 
 o 122646-01 SunOS 5.10: zlogin patch 
 o 122650-02 SunOS 5.10: zfs tools patch 
 o 122652-03 SunOS 5.10: zfs utilities patch 
 o 122658-02 SunOS 5.10: zonecfg patch 
 o 122660-03 SunOS 5.10: zoneadm zoneadmd patch 
 o 122662-02 SunOS 5.10: libzonecfg patch 
 * Man Pages 
 o 119246-15 SunOS 5.10: Manual Page updates for Solaris 10 
 * Other Patches 
 o 119986-03 SunOS 5.10: clri patch 
 o 123358-01 SunOS 5.10: jumpstart and live upgrade compliance 
 o 121430-11 SunOS 5.8 5.9 5.10: Live Upgrade Patch 
 
i386 Patches 
 * ZFS Patches 
 o 118344-11 SunOS 5.10_x86: Fault Manager Patch 
 o 118855-15 SunOS 5.10_x86: kernel patch 
 o 118919-16 SunOS 5.10_x86: Solaris Crypto Framework patch 
 o 120987-04 SunOS 5.10_x86: mkfs, newfs, other ufs utils patch 
 o 122173-04 SunOS 5.10_x86: swap swapadd patch 
 o 122175-03 SunOS 5.10_x86: dumpadm patch 
 o 122638-01 SunOS 5.10_x86: zonename patch 
 o 122641-06 SunOS 5.10_x86: zfs genesis patch 
 o 122647-03 SunOS 5.10_x86: zlogin patch 
 o 122653-03 SunOS 5.10_x86: utilities patch 
 o 122659-03 SunOS 5.10_x86: zonecfg patch 
 o 122661-02 SunOS 5.10_x86: zoneadm patch 
 o 122663-04 SunOS 5.10_x86: libzonecfg patch 
 o 122665-02 SunOS 5.10_x86: rnode.h/systm.h/zone.h header file 
 * Man Pages 
 o 119247-15 SunOS 5.10_x86: Manual Page updates for Solaris 10 
 * Other Patches 
 o 118997-03 SunOS 5.10_x86: format patch 
 o 119987-03 SunOS 5.10_x86: clri patch 
 o 122655-05 SunOS 5.10_x86: jumpstart and live upgrade  compliance
patch 
 o 121431-11 SunOS 5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch 
 
 
Thanks, 
George 
 
 
George Wilson wrote: 
  Dave, 
 
I'm copying the zfs-discuss alias on this as well... 
 
It's possible that not all necessary patches have been installed or they
 may be hitting CR# 6428258. If you reboot the zone does it continue to end
up in maintenance mode? Also do you know if the necessary ZFS/Zones  patches
have been updated? 
 
Take a look at our webpage which includes the patch list required for  Solaris
10: 
 
http://rpe.sfbay/bin/view/Tech/ZFS 
 
Thanks, 
George 
 
Mahesh Siddheshwar wrote: 
 
 
 Original Message  
Subject: [zones-discuss] Zone boot problems after installing patches 
Date: Wed, 02 Aug 2006 13:47:46 -0400 
From: Dave Bevans [EMAIL PROTECTED] 
To: zones-discuss@opensolaris.org, [EMAIL PROTECTED],  [EMAIL PROTECTED] 
  
 
 
 
Hi, 
 
I have a customer with the following problem. 
 
He has a V440 running Solaris 10 1/06 with zones. In the case notes he  says
that he installed a couple Sol 10 patches and now he has problems  booting
his zones. After doing some checking he found that it appears  to be related
to a couple of ZFS patches (122650 and 122640). I found  a bug (6271309
/ lack of zvol breaks all ZFS commands), but not sure  if it applies to this
situation. Any ideas on this. 
 
Here is the customers problem description... 
 
Hardware Platform: Sun Fire V440 
Component Affected: OS Base 
OS and Kernel Version: SunOS snb-fton-bck2 5.10 Generic_118833-18  sun4u
sparc SUNW,Sun-Fire-V440 
 
Describe the problem: Patch 122650-02 combined with patch 122640-05  seems
to have broken non-global zones at boot time. I'm just guessing  at the exact
patches since they were both added recently, and involve  

Re: [zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-04 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer






Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:

  
   Hi
 I guess the problem is that David is using smpatch (our automated patching 
system )
 So in theory he is up to date on his patches
 ( he has since removed 
 122660-02
 

  122658-02
  
  122640-05
 )
 
 So when I install the following onto a system ( SPARC S10 FCS ) with two 
zones already running:
typo: should be Solaris 10 1/06 ( update 1 ), not FCS

 119254-25 ( patchutilties patch )
 119578-26
 118822-30
 118833-18
 122650-02
 122640-05
 And reboot, I too have the same issue, there is no /dev/zfs in my local
zones?
 
 # zonename
 global
 # 
 # cat /etc/release
  Solaris 10 1/06 s10s_u1wos_19a SPARC
  
 # ls /var/sadm/patch
 118822-30 119254-26 120900-04 122640-05
 118833-18 119578-26 121133-02 122650-02
 # uptime
  5:48pm up 2 min(s), 1 user, load average: 0.58, 0.29, 0.11
 # ls /export/zones/sparse-1/dev/zfs
 /export/zones/sparse-1/dev/zfs: No such file or directory
 # zlogin sparse-1 ls /dev/zfs
 /dev/zfs: No such file or directory
 # 
 I rebooted the zone and then the system, touching /reconfigure, all to no 
avail
 I then added the rest of the patches you suggested and rebooted my zones 
and I had /dev/zfs, strange.
 
 But David had all the patches added and still did not get /dev/zfs in the 
non global zones
 Enda
 
 
 George Wilson wrote:
 
  Apologies for 
the internal URL, I'm including the list of patches for  the everyone's benefit:
   
  
  
 sparc Patches 
  * ZFS Patches 
  o 118833-17 SunOS 5.10: kernel patch 
  o 118925-02 SunOS 5.10: unistd header file patch 
  o 119578-20 SunOS 5.10: FMA Patch 
  o 119982-05 SunOS 5.10: ufsboot patch 
  o 120986-04 SunOS 5.10: mkfs and newfs patch 
  o 122172-06 SunOS 5.10: swap swapadd isaexec patch 
  o 122174-03 SunOS 5.10: dumpadm patch 
  o 122637-01 SunOS 5.10: zonename patch 
  o 122640-05 SunOS 5.10: zfs genesis patch 
  o 122644-01 SunOS 5.10: zfs header file patch 
  o 122646-01 SunOS 5.10: zlogin patch 
  o 122650-02 SunOS 5.10: zfs tools patch 
  o 122652-03 SunOS 5.10: zfs utilities patch 
  o 122658-02 SunOS 5.10: zonecfg patch 
  o 122660-03 SunOS 5.10: zoneadm zoneadmd patch 
  o 122662-02 SunOS 5.10: libzonecfg patch 
  * Man Pages 
  o 119246-15 SunOS 5.10: Manual Page updates for Solaris 10 
  * Other Patches 
  o 119986-03 SunOS 5.10: clri patch 
  o 123358-01 SunOS 5.10: jumpstart and live upgrade compliance

  o 121430-11 SunOS 5.8 5.9 5.10: Live Upgrade Patch 
  
 i386 Patches 
  * ZFS Patches 
  o 118344-11 SunOS 5.10_x86: Fault Manager Patch 
  o 118855-15 SunOS 5.10_x86: kernel patch 
  o 118919-16 SunOS 5.10_x86: Solaris Crypto Framework patch 
  o 120987-04 SunOS 5.10_x86: mkfs, newfs, other ufs utils patch

  o 122173-04 SunOS 5.10_x86: swap swapadd patch 
  o 122175-03 SunOS 5.10_x86: dumpadm patch 
  o 122638-01 SunOS 5.10_x86: zonename patch 
  o 122641-06 SunOS 5.10_x86: zfs genesis patch 
  o 122647-03 SunOS 5.10_x86: zlogin patch 
  o 122653-03 SunOS 5.10_x86: utilities patch 
  o 122659-03 SunOS 5.10_x86: zonecfg patch 
  o 122661-02 SunOS 5.10_x86: zoneadm patch 
  o 122663-04 SunOS 5.10_x86: libzonecfg patch 
  o 122665-02 SunOS 5.10_x86: rnode.h/systm.h/zone.h header file

  * Man Pages 
  o 119247-15 SunOS 5.10_x86: Manual Page updates for Solaris 10

  * Other Patches 
  o 118997-03 SunOS 5.10_x86: format patch 
  o 119987-03 SunOS 5.10_x86: clri patch 
  o 122655-05 SunOS 5.10_x86: jumpstart and live upgrade  compliance 
patch 
  o 121431-11 SunOS 5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch

  
  
 Thanks, 
 George 
  
  
 George Wilson wrote: 
   
Dave, 
  
 I'm copying the zfs-discuss alias on this as well... 
  
 It's possible that not all necessary patches have been installed or they 
 maybe hitting CR# 6428258. If you reboot the zone does it continue to  end 
up in maintenance mode? Also do you know if the necessary ZFS/Zones  patches 
have been updated? 
  
 Take a look at our webpage which includes the patch list required for  Solaris 
10: 
  
 http://rpe.sfbay/bin/view/Tech/ZFS
  
  
 Thanks, 
 George 
  
 Mahesh Siddheshwar wrote: 
 
   
  
  Original Message  
 Subject: [zones-discuss] Zone boot problems after installing patches

 Date: Wed, 02 Aug 2006 13:47:46 -0400 
 From: Dave Bevans [EMAIL PROTECTED] 
 To: zones-discuss@opensolaris.org,
[EMAIL PROTECTED],  [EMAIL PROTECTED]
   
  
  
  
 Hi, 
  
 I have a customer with the following problem. 
  
 He has a V440 running Solaris 10 1/06 with zones. In the case notes he 
says that he installed a couple Sol 10 patches and now he has problems  booting 
his zones. After doing some checking he found that it appears  to be related 
to a couple of ZFS patches (122650 and 122640). I found  a bug (6271309 /
lack of zvol breaks all ZFS commands), but not sure  if it applies to this 
situation. Any ideas on this. 
  
 Here is the customers problem description... 
  
 Hardware Platform: Sun Fire V440 
 Component

Re: [zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-04 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer




Hi
I logged CR 6457216 to track this for now.


Enda

Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:

  
   
 
 Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
 
  

   Hi
  I guess the problem is that David is using smpatch (our automated patching
 system )
  So in theory he is up to date on his patches
  ( he has since removed 
  122660-02 

 
122658-02
   
   122640-05
  )
  
  So when I install the following onto a system ( SPARC S10 FCS ) with two
 zones already running:
 typo should be update 10 1/06 not FCS
 
  
  119254-25 ( patchutilties patch )
  119578-26
  118822-30
  118833-18
  122650-02
  122640-05
  And reboot, I too have the same issue, there is no /dev/zfs in my local 
zones?
  
  # zonename
  global
  # 
  # cat /etc/release
   Solaris 10 1/06 s10s_u1wos_19a SPARC
   
  # ls /var/sadm/patch
  118822-30 119254-26 120900-04 122640-05
  118833-18 119578-26 121133-02 122650-02
  # uptime
   5:48pm up 2 min(s), 1 user, load average: 0.58, 0.29, 0.11
  # ls /export/zones/sparse-1/dev/zfs
  /export/zones/sparse-1/dev/zfs: No such file or directory
  # zlogin sparse-1 ls /dev/zfs
  /dev/zfs: No such file or directory
  # 
  I rebooted the zone and then the system, touching /reconfigure, all to
no  avail
  I then added the rest of the patches you suggested and rebooted my zones
 and I had /dev/zfs, strange.
  
  But David had all the patches added and still did not get /dev/zfs in the
 non global zones
  Enda
  
  
  George Wilson wrote:
 
Apologies
for  the internal URL, I'm including the list of patches for  the everyone's
benefit:
   
   
  sparc Patches 
   * ZFS Patches 
   o 118833-17 SunOS 5.10: kernel patch 
   o 118925-02 SunOS 5.10: unistd header file patch 
   o 119578-20 SunOS 5.10: FMA Patch 
   o 119982-05 SunOS 5.10: ufsboot patch 
   o 120986-04 SunOS 5.10: mkfs and newfs patch 
   o 122172-06 SunOS 5.10: swap swapadd isaexec patch 
   o 122174-03 SunOS 5.10: dumpadm patch 
   o 122637-01 SunOS 5.10: zonename patch 
   o 122640-05 SunOS 5.10: zfs genesis patch 
   o 122644-01 SunOS 5.10: zfs header file patch 
   o 122646-01 SunOS 5.10: zlogin patch 
   o 122650-02 SunOS 5.10: zfs tools patch 
   o 122652-03 SunOS 5.10: zfs utilities patch 
   o 122658-02 SunOS 5.10: zonecfg patch 
   o 122660-03 SunOS 5.10: zoneadm zoneadmd patch 
   o 122662-02 SunOS 5.10: libzonecfg patch 
   * Man Pages 
   o 119246-15 SunOS 5.10: Manual Page updates for Solaris 10 
   * Other Patches 
   o 119986-03 SunOS 5.10: clri patch 
   o 123358-01 SunOS 5.10: jumpstart and live upgrade compliance 

   o 121430-11 SunOS 5.8 5.9 5.10: Live Upgrade Patch 
   
  i386 Patches 
   * ZFS Patches 
   o 118344-11 SunOS 5.10_x86: Fault Manager Patch 
   o 118855-15 SunOS 5.10_x86: kernel patch 
   o 118919-16 SunOS 5.10_x86: Solaris Crypto Framework patch 
   o 120987-04 SunOS 5.10_x86: mkfs, newfs, other ufs utils patch 

   o 122173-04 SunOS 5.10_x86: swap swapadd patch 
   o 122175-03 SunOS 5.10_x86: dumpadm patch 
   o 122638-01 SunOS 5.10_x86: zonename patch 
   o 122641-06 SunOS 5.10_x86: zfs genesis patch 
   o 122647-03 SunOS 5.10_x86: zlogin patch 
   o 122653-03 SunOS 5.10_x86: utilities patch 
   o 122659-03 SunOS 5.10_x86: zonecfg patch 
   o 122661-02 SunOS 5.10_x86: zoneadm patch 
   o 122663-04 SunOS 5.10_x86: libzonecfg patch 
   o 122665-02 SunOS 5.10_x86: rnode.h/systm.h/zone.h header file 

   * Man Pages 
   o 119247-15 SunOS 5.10_x86: Manual Page updates for Solaris 10 

   * Other Patches 
   o 118997-03 SunOS 5.10_x86: format patch 
   o 119987-03 SunOS 5.10_x86: clri patch 
   o 122655-05 SunOS 5.10_x86: jumpstart and live upgrade  compliance
 patch 
   o 121431-11 SunOS 5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch 

   
   
  Thanks, 
  George 
   
   
  George Wilson wrote: 
 
  Dave, 
   
  I'm copying the zfs-discuss alias on this as well... 
   
  It's possible that not all necessary patches have been installed or they
  maybe hitting CR# 6428258. If you reboot the zone does it continue to 
end  up in maintenance mode? Also do you know if the necessary ZFS/Zones
 patches  have been updated? 
   
  Take a look at our webpage which includes the patch list required for 
Solaris  10: 
   
  http://rpe.sfbay/bin/view/Tech/ZFS 
  
   
  Thanks, 
  George 
   
  Mahesh Siddheshwar wrote: 
 
 
   
   Original Message  
  Subject: [zones-discuss] Zone boot problems after installing patches 

  Date: Wed, 02 Aug 2006 13:47:46 -0400 
  From: Dave Bevans [EMAIL PROTECTED] 
  To: zones-discuss@opensolaris.org, 
[EMAIL PROTECTED],  [EMAIL PROTECTED] 
   
   
   
   
  Hi, 
   
  I have a customer with the following problem. 
   
  He has a V440 running Solaris 10 1/06 with zones. In the case notes he
 says that he installed a couple Sol 10 patches and now he has problems 
booting  his zones. After doing some checking

[zfs-discuss] Re: query re share and zfs

2006-07-04 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer

Slight typo

I had to run

# zfs umount tank
cannot unmount 'tank': not currently mounted
# zfs umount /export/home1
# zfs umount /export/home
#

in order to get zpool destroy to run


Enda

Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:


Hi
I was trying to overlay a pool onto an existing mount



# cat /etc/release
  Solaris 10 6/06 s10s_u2wos_09a SPARC
   
# df -k /export
Filesystemkbytesused   avail capacity  Mounted on
/dev/dsk/c1t0d0s320174761 3329445 1664356917%/export
# share
#
#zpool create -f tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
#zfs create tank/home

#zfs create tank/home1
#zfs set mountpoint=/export tank
cannot mount '/export': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
#zfs set sharenfs=on tank/home
#zfs set sharenfs=on tank/home1
# share
-               /export/home   rw
-               /export/home1   rw
#



Now I ran the following to force the mount

# df -k /export
Filesystemkbytesused   avail capacity  Mounted on
/dev/dsk/c1t0d0s320174761 3329445 1664356917%/export
# zfs mount -O tank
# df -k /export
Filesystemkbytesused   avail capacity  Mounted on
tank 701890560  53 701890286 1%/export
#

Then further down the line I tried
# zpool destroy tank
cannot unshare 'tank/home': /export/home: not shared
cannot unshare 'tank/home1': /export/home1: not shared
could not destroy 'tank': could not unmount datasets
#

I eventually got this to go with
# zfs umount tank/home
# zfs umount tank/home1
# zpool destroy -f tank
#

Is this normal, and if so why?


Enda







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] assertion failure when destroy zpool on tmpfs

2006-06-27 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer




Hi
Looks like the same stack as 6413847, although that one points more towards hardware
failure.

The stack below is from 5.11 snv_38, but this also seems to affect update 2 as
per the above bug.

Enda

Thomas Maier-Komor wrote:

  Hi,

my colleage is just testing ZFS and created a zpool which had a backing store file on a TMPFS filesystem. After deleting the file everything still worked normally. But destroying the pool caused an assertion failure and a panic. I know this is neither a real-live szenario nor a good idea. The assertion failure occured on Solaris 10 update 2.

Below is some mdb output, in case someone is interested in this.

BTW: great to have Solaris 10 update 2 with ZFS. I can't wait to deploy it.

Cheers,
Tom

  
  
::panicinfo

  
   cpu1
  thread  2a100ea7cc0
 message 
assertion failed: vdev_config_sync(rvd, txg) == 0, file: ../../common/fs/zfs/spa.c, line: 2149
  tstate   4480001601
  g1  30037505c40
  g2   10
  g32
  g42
  g53
  g6   16
  g7  2a100ea7cc0
  o0  11eb1e8
  o1  2a100ea7928
  o2  306f5b0
  o3  30037505c50
  o4  3c7a000
  o5   15
  o6  2a100ea6ff1
  o7  105e560
  pc  104220c
 npc  1042210
   y   10 
  
  
::stack

  
  vpanic(11eb1e8, 13f01d8, 13f01f8, 865, 600026d4ef0, 60002793ac0)
assfail+0x7c(13f01d8, 13f01f8, 865, 183e000, 11eb000, 0)
spa_sync+0x190(60001f244c0, 3dd9, 600047f3500, 0, 2a100ea7cc4, 2a100ea7cbc)
txg_sync_thread+0x130(60001f9c580, 3dd9, 2a100ea7ab0, 60001f9c6a0, 60001f9c692, 
60001f9c690)
thread_start+4(60001f9c580, 0, 0, 0, 0, 0)
  
  
::status

  
  debugging crash dump vmcore.0 (64-bit) from ai
operating system: 5.11 snv_38 (sun4u)
panic message: 
assertion failed: vdev_config_sync(rvd, txg) == 0, file: ../../common/fs/zfs/spa.c, line: 2149
dump content: kernel pages only
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss