Re: [zfs-discuss] zpool import hangs

2008-11-05 Thread Jens Hamisch
Hi Erik,
hi Victor,


I have exactly the same problem as you described in your thread.
Could you please explain to me what to do to recover the data
on the pool?


Thanks in advance,
Jens Hamisch
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Tomas Ögren
On 05 November, 2008 - Krzys sent me these 18K bytes:

 
 I am not sure what I did wrong, but I did follow all the steps to get my 
 system moved from UFS to ZFS and now I am unable to boot it... can anyone 
 suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
 cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped

This should have caught both your attention and lucreate's attention...

If the copying process core dumps, then I guess most bets are off..

 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.
 Population of boot environment zfsBE successful.
 Creation of boot environment zfsBE successful.

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong, but I did follow all the steps to get my 
 system moved from UFS to ZFS and now I am unable to boot it... can anyone 
 suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
 cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
Hmm, the above might be relevant, I'd guess.

What release are you on, i.e. is this Solaris 10, or is this a Nevada build?

Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.
 Population of boot environment zfsBE successful.
 Creation of boot environment zfsBE successful.
 [01:19:36] @adas: /root 
 Nov  5 02:44:16 adas root:  = [EMAIL PROTECTED] =com.sun.cc.platform.clientsignature.CNSSignException: Error reading private key
 Nov  5 02:44:16 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting: ANY PRIVATE KEY
 Nov  5 02:44:16 adas    at com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown Source)
 Nov  5 02:44:16 adas    at com.sun.cc.platform.clientsignature.CNSClientSignature.init(Unknown Source)
 Nov  5 02:44:16 adas    at com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown Source)
 Nov  5 02:44:16 adas root:  = [EMAIL PROTECTED] = at com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430)
 Nov  5 02:44:16 adas    at com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618)
 Nov  5 02:44:16 adas    at com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282)
 Nov  5 02:44:16 adas    at com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208)
 Nov  5 02:44:16 adas    at com.sun.patchpro.util.CachingDownloader.init(CachingDownloader.java:187)
 Nov  5 02:44:16 adas root:  = [EMAIL PROTECTED] = at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242)
 Nov  5 02:44:16 adas    at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928)
 Nov  5 02:44:16 adas    at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468)
 Nov  5 02:44:16 adas    at com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156)
 Nov  5 02:44:16 adas    at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163)
 Nov  5 02:44:16 adas root:  = [EMAIL PROTECTED] = at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752)
 Nov  5 02:44:16 adas    at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108)
 Nov  5 02:44:16 adas    at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181)
 Nov  5 02:44:16 adas    at com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108)
 Nov  5 02:44:16 adas    at com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849)
 Nov  5 02:44:16 adas root:  = [EMAIL PROTECTED] =null at com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277)
 Nov  5 02:44:16 adas    at com.sun.patchpro.util.State.run(State.java:266)
 Nov  5 02:44:16 adas    at java.lang.Thread.run(Thread.java:595)
 Nov  5 02:44:17 adas root:  = [EMAIL PROTECTED] =com.sun.cc.platform.clientsignature.CNSSignException: Error reading private key
 Nov  5 02:44:17 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting:
 

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
Yes, I did notice that error too, but when I ran lustatus it showed the BE as 
OK, so I assumed it was safe to boot from it; but even booting from the 
original disk caused problems and I was unable to boot my system...


Anyway, I powered off the system for a few minutes and then started it up, and 
it booted from the original disk without any problems; I just had to do a hard 
reset on the box for some reason.


On Wed, 5 Nov 2008, Tomas Ögren wrote:


On 05 November, 2008 - Krzys sent me these 18K bytes:



I am not sure what I did wrong but I did follow up all the steps to get my
system moved from ufs to zfs and not I am unable to boot it... can anyone
suggest what I could do to fix it?

here are all my steps:

[00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
[00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
Comparing source boot environment ufsBE file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment zfsBE.
Source boot environment is ufsBE.
Creating boot environment zfsBE.
Creating file systems on boot environment zfsBE.
Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
Populating file systems on boot environment zfsBE.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point /.
Copying.
Bus Error - core dumped


This should have cought both your attention and lucreate's attention..

If the copying process core dumps, then I guess most bets are off..


Creating shared file system mount points.
Creating compare databases for boot environment zfsBE.
Creating compare database for file system /var.
Creating compare database for file system /usr.
Creating compare database for file system /rootpool/ROOT.
Creating compare database for file system /.
Updating compare databases on boot environment zfsBE.
Making boot environment zfsBE bootable.
Population of boot environment zfsBE successful.
Creation of boot environment zfsBE successful.


/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
Sorry, it's Solaris 10 U6, not Nevada. I just upgraded to U6 and was hoping I 
could take advantage of ZFS boot mirroring.
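
For reference, once the zfsBE actually boots from the root pool, the mirroring itself usually comes down to attaching a second SMI-labelled slice and installing a boot block on it. A hedged sketch for a SPARC box follows; the device names are hypothetical and must match your own slices:

# attach a second slice to the existing root pool to form a mirror
zpool attach rootpool c1t1d0s0 c1t0d0s0

# watch the resilver complete before relying on the mirror
zpool status rootpool

# install the ZFS boot block on the newly attached slice (SPARC)
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0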

On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get my 
 system moved from ufs to zfs and not I am unable to boot it... can anyone 
 suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.
 Population of boot environment zfsBE successful.
 Creation of boot environment zfsBE successful.
 [01:19:36] @adas: /root 
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] 
 =com.sun.cc.platform.clientsignature.CNSSignException: Error reading 
 private key
 Nov  5 02:44:16 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start 
 line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting:
  
 ANY PRIVATE KEY
 Nov  5 02:44:16 adasat 
 com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown 
 Source)
 Nov  5 02:44:16 adasat 
 com.sun.cc.platform.clientsignature.CNSClientSignature.init(Unknown 
 Source)
 Nov  5 02:44:16 adasat 
 com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown 
 Source)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] = at 
 com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.util.CachingDownloader.init(CachingDownloader.java:187)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] = at 
 com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] = at 
 com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108)
 Nov  5 02:44:16 adasat 
 com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849)
 Nov  5 02:44:16 adas root:  = 
 [EMAIL PROTECTED] =nullat 
 com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277)
 Nov  5 02:44:16 adasat com.sun.patchpro.util.State.run(State.java:266)
 Nov  5 02:44:16 adasat java.lang.Thread.run(Thread.java:595)
 Nov  5 02:44:17 adas root:  = 
 [EMAIL PROTECTED] 
 =com.sun.cc.platform.clientsignature.CNSSignException: Error reading 
 

[zfs-discuss] ZFS on emcpower0a and labels

2008-11-05 Thread dmagda
Hello,

A question on putting ZFS on EMC pseudo-devices:

I have a T1000 where we were given 100 GB of SAN space from EMC:

# format < /dev/null
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
      /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
      /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t5006016030602568d0 <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
      /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   3. c1t5006016830602568d0 <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
      /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   4. emcpower0a <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
      /pseudo/[EMAIL PROTECTED]
Specify disk (enter its number):

# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00052300875 [.HOSTNAME.]
Logical device ID=60060160B1221300084781BEAFAADD11 [LUN 87]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A   Array failover mode: 1
==============================================================================
---------------- Host ----------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
3073 [EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0   c1t5006016030602568d0s0   SP A0   active   alive   0   0
3073 [EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0   c1t5006016830602568d0s0   SP B0   active   alive   0   0

When I tried to create a pool on the straight device I got an error:

# zpool create ldom-sparc-111 emcpower0a
cannot open '/dev/dsk/emcpower0a': I/O error

#  zpool create ldom-sparc-111 emcpower0a
[...]
open("/dev/zfs", O_RDWR)                        = 3
open("/etc/mnttab", O_RDONLY)                   = 4
open("/etc/dfs/sharetab", O_RDONLY)             Err#2 ENOENT
stat64("/dev/dsk/emcpower0as2", 0xFFBFB2D8)     Err#2 ENOENT
stat64("/dev/dsk/emcpower0a", 0xFFBFB2D8)       = 0
brk(0x000B2000)                                 = 0
open("/dev/dsk/emcpower0a", O_RDONLY)           Err#5 EIO
fstat64(2, 0xFFBF9F90)                          = 0
cannot open 'write(2, " c a n n o t   o p e n  ".., 13)  = 13
/dev/dsk/emcpower0awrite(2, " / d e v / d s k / e m c".., 19)  = 19
': write(2, " ' :  ", 3)                        = 3
I/O errorwrite(2, " I / O   e r r o r", 9)      = 9

write(2, "\n", 1)                               = 1
close(3)                                        = 0
llseek(4, 0, SEEK_CUR)                          = 0
close(4)                                        = 0
brk(0x000C2000)                                 = 0
_exit(1)

I then put a label on it, and things work fine:

Current partition table (original):
Total disk cylinders available: 51198 + 2 (reserved cylinders)

Part  TagFlag Cylinders SizeBlocks
  0usrwm   0 - 51174   99.95GB(51175/0/0) 209612800
  1 unassignedwu   00 (0/0/0) 0
  2 backupwu   0 - 51197  100.00GB(51198/0/0) 209707008
  3 unassignedwm   00 (0/0/0) 0
  4 unassignedwm   00 (0/0/0) 0
  5 unassignedwm   00 (0/0/0) 0
  6 unassignedwm   00 (0/0/0) 0
  7 unassignedwm   00 (0/0/0) 0

# zpool status
  pool: ldom-sparc-111
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
ldom-sparc-111  ONLINE   0 0 0
  emcpower0a  ONLINE   0 0 0

errors: No known data errors
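
In other words, the sequence that worked here was roughly: write a Solaris (SMI) label onto the pseudo-device with format(1M), then create the pool. A minimal sketch, reusing the names from the output above:

# label the EMC pseudo-device first (interactive: use the partition/label menus)
format emcpower0a

# with a valid label in place, pool creation succeeds
zpool create ldom-sparc-111 emcpower0a
zpool status ldom-sparc-111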

We have another T1000 with SAN space as well, and I don't remember having
to label the disk (though I could be mis-remembering):

Total disk sectors available: 524271582 + 16384 (reserved sectors)

Part  TagFlag First Sector Size Last Sector
  0usrwm34  249.99GB  524271582
  1 unassignedwm 0   0   0
  2 unassignedwm 0   0   0
  3 unassignedwm 0   0   0
  4 unassignedwm 0   0   0
  5 unassignedwm 0   0   0
  6 unassignedwm 0   0   0
  8   reservedwm 5242715838.00MB  524287966

# zpool status
  pool: ldom-sparc-110
 state: ONLINE
 scrub: none requested
config:

NAME  STATE READ 

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi Krzys,
Also, some info on the actual system: what was it upgraded to U6 from, and 
how? And an idea of how the filesystems are laid out, i.e. is /usr separate 
from / and so on (maybe a df -k). You don't appear to have any zones 
installed, just to confirm.
Enda

On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like
 
 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled
 
 then all should be good, and cores should appear in /var/crash
 
 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process
 
 
 coreadm -u to load the new settings without rebooting.
 
 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.
 
 then rerun test and check /var/crash for core dump.
 
 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
 ufsBE -n zfsBE -p rootpool
 
 might give an indication, look for SIGBUS in the truss log
 
 NOTE, that you might want to reset the coreadm and ulimit for coredumps 
 after this, in order to not risk filling the system with coredumps in 
 the case of some utility coredumping in a loop say.
 
 
 Enda
 On 11/05/08 13:46, Krzys wrote:

 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to 
 get my system moved from ufs to zfs and not I am unable to boot 
 it... can anyone suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada 
 build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.

 Anyway I did restart the whole process again, and I got again that Bus 
 Error

 [07:59:01] [EMAIL PROTECTED]: /root  zpool create rootpool c1t1d0s0
 [07:59:22] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool/ROOT
 cannot open 'rootpool/ROOT': dataset does not exist
 [07:59:27] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool
 [07:59:31] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.



 

Re: [zfs-discuss] Is there a baby thumper?

2008-11-05 Thread Gary Mills
On Tue, Nov 04, 2008 at 05:48:26PM -0600, Tim wrote:
 
Well, what's the end goal?  What are you testing for that you need
from the thumper?
I/O interfaces?  CPU?  Chipset?  If you need *everything* you don't
have any other choice.

I suppose that something with SATA disks and the same disk controller
would be most suitable.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi,
Did you get a core dump? It would be nice to see the core file to get an idea 
of what dumped core. You might configure coreadm if that is not already done; 
run coreadm first, and if the output looks like

# coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled

then all should be good, and cores should appear in /var/crash

otherwise the following should configure coreadm:
coreadm -g /var/crash/core.%f.%p
coreadm -G all
coreadm -e global
coreadm -e per-process


coreadm -u to load the new settings without rebooting.

You might also need to set the size of the core dump via
ulimit -c unlimited
(check ulimit -a first).

Then rerun the test and check /var/crash for the core dump.

If that fails, a truss via, say, truss -fae -o /tmp/truss.out lucreate -c 
ufsBE -n zfsBE -p rootpool might give an indication; look for SIGBUS in the 
truss log.

NOTE that you might want to reset the coreadm and ulimit settings for core 
dumps after this, in order not to risk filling the system with core dumps in 
the case of some utility core-dumping in a loop, say.
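
Pulling the above together, a consolidated sketch of the capture setup (the same commands as above in one block; adjust the core file pattern to taste):

# route global and per-process cores to /var/crash, named core.<program>.<pid>
coreadm -g /var/crash/core.%f.%p
coreadm -G all
coreadm -e global
coreadm -e per-process
coreadm -u                      # apply without a reboot

# allow unlimited core file size in this shell
ulimit -c unlimited

# rerun the failing step under truss, then look for SIGBUS in the log
truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool
grep SIGBUS /tmp/truss.out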


Enda
On 11/05/08 13:46, Krzys wrote:
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get my 
 system moved from ufs to zfs and not I am unable to boot it... can anyone 
 suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.
 
 Anyway I did restart the whole process again, and I got again that Bus Error
 
 [07:59:01] [EMAIL PROTECTED]: /root  zpool create rootpool c1t1d0s0
 [07:59:22] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool/ROOT
 cannot open 'rootpool/ROOT': dataset does not exist
 [07:59:27] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool
 [07:59:31] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
 cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 
 
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


-- 
Enda O'Connor x19781  Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
___
zfs-discuss mailing list

[zfs-discuss] Race condition yields to kernel panic (u3, u4) or hanging zfs commands (u5)

2008-11-05 Thread Andreas Koppenhoefer
Hello,

occasionally we got some Solaris 10 servers to panic in ZFS code while doing
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive poolname.
The race condition(s) get triggered by a broken data transmission or by 
killing the sending zfs or ssh command. The panic or the hanging zfs commands 
occur on the receiving end.

I've tried to reproduce the conditions with a small (quick & dirty) script.
You'll find it as an attached file. After starting the enclosed script with
./zfs-hardness-test.sh /var/tmp
it does the following:
- Creates a zpool on files in /var/tmp/...
- Copies some files to that zpool and creates snapshot A.
- Copies more files into the zpool and creates another snapshot B.
- zfs send snapA > fileA
- zfs send -i snapA snapB > fileAB
- destroys the zpool and creates a new/empty zpool
- zfs receive of snapA from fileA via ssh localhost
At this point the script enters an endless loop of...
- zfs receive of incremental snapAB from fileAB via ssh localhost
- zfs destroy snapB (if it exists)
- zfs list
- restart loop

The key point is that the receive of the incremental snapAB will break at some 
random point, except for the first iteration, where a full receive will be 
done. To break things, the script sends a SIGINT to the zfs and ssh commands. 
This simulates a broken data transmission.
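
The attached script is not reproduced here, but the loop it implements looks roughly like the following sketch (pool, dataset, snapshot and file names are purely illustrative, and per the disclaimer below this should only ever be run on a disposable test box):

# rough sketch of the break-the-receive loop; NOT the attached script
while :; do
    # incremental receive via ssh to localhost, in the background
    ssh localhost "zfs receive -F tank/data" < /var/tmp/fileAB &
    pid=$!

    sleep 2                     # let it get part-way through the stream
    kill -INT $pid              # SIGINT simulates a broken transmission

    # destroy the (possibly partial) snapshot if it made it across
    zfs list tank/data@snapB >/dev/null 2>&1 && zfs destroy tank/data@snapB

    zfs list                    # this is where the hang/panic shows up
done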

I suppose that on a busy (or slow) server you'll have more chances to trigger 
the problem. On all tested systems, SPARC and x86, Solaris will panic or hang 
in zfs receive and zfs list.

Is this one (or two?) known bug(s)? Are there patches available?

This is the output of uname -a; head -1 /etc/release for some of the systems tested:
SunOS vs104is1 5.10 Generic_127112-02 i86pc i386 i86pc
Solaris 10 8/07 s10x_u4wos_12b X86
SunOS qachpi10 5.10 Generic_137112-02 i86pc i386 i86pc
Solaris 10 5/08 s10x_u5wos_10 X86
SunOS qacult10 5.10 Generic_137111-08 sun4u sparc SUNW,Ultra-5_10
   Solaris 10 5/08 s10s_u5wos_10 SPARC
SunOS qacpp03 5.10 Generic_127111-05 sun4us sparc FJSV,GPUSC-M
   Solaris 10 11/06 s10s_u3wos_10 SPARC
SunOS qacult31 5.10 Generic_127127-11 sun4u sparc SUNW,Ultra-30
   Solaris 10 5/08 s10s_u5wos_10 SPARC

Disclaimer:
DO NOT USE THE ATTACHED SCRIPT ON PRODUCTIVE SERVER. USE IT AT YOUR OWN RISK. 
USE IT ON A TEST SERVER ONLY. IT'S LIKELY THAT YOU WILL DAMAGE YOUR SERVER OR 
YOUR DATA BY RUNNING THIS SCRIPT.

- Andreas
-- 
This message posted from opensolaris.org

zfs-hardness-test.sh
Description: Binary data
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys


On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get my 
 system moved from ufs to zfs and not I am unable to boot it... can anyone 
 suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection integrity.
 Integrity check OK.
 Populating contents of mount point /.
 Copying.
 Bus Error - core dumped
 hmm above might be relevant I'd guess.

 What release are you on , ie is this Solaris 10, or is this Nevada build?

 Enda
 Creating shared file system mount points.
 Creating compare databases for boot environment zfsBE.
 Creating compare database for file system /var.
 Creating compare database for file system /usr.
 Creating compare database for file system /rootpool/ROOT.
 Creating compare database for file system /.
 Updating compare databases on boot environment zfsBE.
 Making boot environment zfsBE bootable.

Anyway, I restarted the whole process again, and again I got that Bus Error:

[07:59:01] [EMAIL PROTECTED]: /root  zpool create rootpool c1t1d0s0
[07:59:22] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool/ROOT
cannot open 'rootpool/ROOT': dataset does not exist
[07:59:27] [EMAIL PROTECTED]: /root  zfs set compression=on rootpool
[07:59:31] [EMAIL PROTECTED]: /root  lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
Comparing source boot environment ufsBE file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device /dev/dsk/c1t1d0s0 is not a root device for any boot environment; 
cannot get BE ID.
Creating configuration for boot environment zfsBE.
Source boot environment is ufsBE.
Creating boot environment zfsBE.
Creating file systems on boot environment zfsBE.
Creating zfs file system for / in zone global on rootpool/ROOT/zfsBE.
Populating file systems on boot environment zfsBE.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point /.
Copying.
Bus Error - core dumped
Creating shared file system mount points.
Creating compare databases for boot environment zfsBE.
Creating compare database for file system /var.
Creating compare database for file system /usr.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] compression on for zpool boot disk?

2008-11-05 Thread Robert Milkowski
Hello Krzys,

Wednesday, November 5, 2008, 5:41:16 AM, you wrote:

K compression is not supported for rootpool?

K # zpool create rootpool c1t1d0s0
K # zfs set compression=gzip-9 rootpool
K # lucreate -c ufsBE -n zfsBE -p rootpool
K Analyzing system configuration.
K ERROR: ZFS pool rootpool does not support boot environments
K #

K why? are there any plans to have compression on that disk available? how 
about
K encryption will that be available on zfs boot disk at some point too?

only lzjb compression is supported on rpools now.
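
In other words, on a root pool only the default lzjb algorithm is accepted, e.g. (a small sketch using the pool name from the transcript above):

# gzip-9 on the root pool makes lucreate reject it ("does not support boot environments");
# lzjb, which is what compression=on selects, is fine:
zfs set compression=on rootpool
zfs get compression rootpool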


-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
Great, I will follow this, but I was wondering whether maybe I did not set up 
my disk correctly. From what I understand, a root pool cannot be set up on the 
whole disk as other pools are, so I partitioned my disk so that all the space 
is in the s0 slice. Maybe that's not correct?

[10:03:45] [EMAIL PROTECTED]: /root  format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
    0. c1t0d0 <SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720>
       /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
    1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
       /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Specify disk (enter its number): 1
selecting c1t1d0
[disk formatted]
/dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see zpool(1M).
/dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see zpool(1M).


FORMAT MENU:
 disk   - select a disk
 type   - select (define) a disk type
 partition  - select (define) a partition table
 current- describe the current disk
 format - format and analyze the disk
 repair - repair a defective sector
 label  - write label to the disk
 analyze- surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 save   - save new disk/partition definitions
 inquiry- show vendor, product and revision
 volname- set 8-character volume name
 !cmd - execute cmd, then return
 quit
format> verify

Primary label contents:

Volume name = 
ascii name  = SUN36G cyl 24620 alt 2 hd 27 sec 107
pcyl= 24622
ncyl= 24620
acyl=2
nhead   =   27
nsect   =  107
Part  TagFlag Cylinders SizeBlocks
   0   rootwm   0 - 24619   33.92GB(24620/0/0) 71127180
   1 unassignedwu   00 (0/0/0)0
   2 backupwm   0 - 24619   33.92GB(24620/0/0) 71127180
   3 unassignedwu   00 (0/0/0)0
   4 unassignedwu   00 (0/0/0)0
   5 unassignedwu   00 (0/0/0)0
   6 unassignedwu   00 (0/0/0)0
   7 unassignedwu   00 (0/0/0)0

format>


On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
 global core file pattern: /var/crash/core.%f.%p
 global core file content: default
   init core file pattern: core
   init core file content: default
global core dumps: enabled
   per-process core dumps: enabled
  global setid core dumps: enabled
 per-process setid core dumps: disabled
 global core dump logging: enabled

 then all should be good, and cores should appear in /var/crash

 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process


 coreadm -u to load the new settings without rebooting.

 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.

 then rerun test and check /var/crash for core dump.

 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c ufsBE 
 -n zfsBE -p rootpool

 might give an indication, look for SIGBUS in the truss log

 NOTE, that you might want to reset the coreadm and ulimit for coredumps after 
 this, in order to not risk filling the system with coredumps in the case of 
 some utility coredumping in a loop say.


 Enda
 On 11/05/08 13:46, Krzys wrote:
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get 
 my system moved from ufs to zfs and not I am unable to boot it... can 
 anyone suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global 

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi,
No, that should be fine: as long as the disk is SMI labelled you are OK, and 
LU would have failed much earlier if it had found an EFI labelled disk.
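
One hedged way to double-check which label type a disk carries is prtvtoc: an SMI (VTOC) label reports its geometry in cylinders plus a whole-disk backup slice 2, while an EFI label reports only sectors and a small reserved slice 8. A sketch, using the disk from the transcript:

# the Dimensions section of an SMI/VTOC label is expressed in cylinders
prtvtoc /dev/rdsk/c1t1d0s2

# if a disk ever ends up EFI-labelled, format -e can write an SMI label back
# (interactive: choose "label", then the SMI option)
# format -e c1t1d0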

The core dump is not due to this; something else is causing that.
Enda

On 11/05/08 15:14, Krzys wrote:
 Great, I will follow this, but I was wondering maybe I did not setup my 
 disc correctly? from what I do understand zpool cannot be setup on whole 
 disk as other pools are so I did partition my disk so all the space is 
 in s0 slice. Maybe I thats not correct?
 
 [10:03:45] [EMAIL PROTECTED]: /root  format
 Searching for disks...done
 
 
 AVAILABLE DISK SELECTIONS:
0. c1t0d0 SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
1. c1t1d0 SUN36G cyl 24620 alt 2 hd 27 sec 107
   /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
 Specify disk (enter its number): 1
 selecting c1t1d0
 [disk formatted]
 /dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see 
 zpool(1M).
 /dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see 
 zpool(1M).
 
 
 FORMAT MENU:
 disk   - select a disk
 type   - select (define) a disk type
 partition  - select (define) a partition table
 current- describe the current disk
 format - format and analyze the disk
 repair - repair a defective sector
 label  - write label to the disk
 analyze- surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 save   - save new disk/partition definitions
 inquiry- show vendor, product and revision
 volname- set 8-character volume name
 !cmd - execute cmd, then return
 quit
 format verify
 
 Primary label contents:
 
 Volume name = 
 ascii name  = SUN36G cyl 24620 alt 2 hd 27 sec 107
 pcyl= 24622
 ncyl= 24620
 acyl=2
 nhead   =   27
 nsect   =  107
 Part  TagFlag Cylinders SizeBlocks
   0   rootwm   0 - 24619   33.92GB(24620/0/0) 71127180
   1 unassignedwu   00 (0/0/0)0
   2 backupwm   0 - 24619   33.92GB(24620/0/0) 71127180
   3 unassignedwu   00 (0/0/0)0
   4 unassignedwu   00 (0/0/0)0
   5 unassignedwu   00 (0/0/0)0
   6 unassignedwu   00 (0/0/0)0
   7 unassignedwu   00 (0/0/0)0
 
 format
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
 global core file pattern: /var/crash/core.%f.%p
 global core file content: default
   init core file pattern: core
   init core file content: default
global core dumps: enabled
   per-process core dumps: enabled
  global setid core dumps: enabled
 per-process setid core dumps: disabled
 global core dump logging: enabled

 then all should be good, and cores should appear in /var/crash

 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process


 coreadm -u to load the new settings without rebooting.

 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.

 then rerun test and check /var/crash for core dump.

 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
 ufsBE -n zfsBE -p rootpool

 might give an indication, look for SIGBUS in the truss log

 NOTE, that you might want to reset the coreadm and ulimit for 
 coredumps after this, in order to not risk filling the system with 
 coredumps in the case of some utility coredumping in a loop say.


 Enda
 On 11/05/08 13:46, Krzys wrote:

 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to 
 get my system moved from ufs to zfs and not I am unable to boot 
 it... can anyone suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining 
 which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device 

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
I upgraded from U5 to U6 from DVD and went through the upgrade process.
My file system is set up as follows:
[10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
"platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr"
Filesystem size   used  avail capacity  Mounted on
/dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
/dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
/dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
swap   8.5G   229M   8.3G 3%/tmp
swap   8.3G40K   8.3G 1%/var/run
/dev/dsk/c1t0d0s7   78G   1.2G76G 2%/export/home
rootpool33G19K21G 1%/rootpool
rootpool/ROOT   33G18K21G 1%/rootpool/ROOT
rootpool/ROOT/zfsBE 33G31M21G 1%/.alt.tmp.b-UUb.mnt
/export/home78G   1.2G76G 2% 
/.alt.tmp.b-UUb.mnt/export/home
/rootpool   21G19K21G 1%/.alt.tmp.b-UUb.mnt/rootpool
/rootpool/ROOT  21G18K21G 1% 
/.alt.tmp.b-UUb.mnt/rootpool/ROOT
swap   8.3G 0K   8.3G 0%/.alt.tmp.b-UUb.mnt/var/run
swap   8.3G 0K   8.3G 0%/.alt.tmp.b-UUb.mnt/tmp
[10:12:00] [EMAIL PROTECTED]: /root 


So I have /, /usr, /var and /export/home on that primary disk. The original 
disk is 140gb and this new one is only 36gb, but disk utilization on that 
primary disk is much lower, so it should easily fit on it.

/ 7.2GB
/usr 8.7GB
/var 2.5GB
/export/home 1.2GB
total space 19.6GB
I did notice that lucreate allocated 8GB to SWAP and 4GB to DUMP,
so the total space needed is 31.6GB,
while the total available disk space on my disk should be 33.92GB,
so the two numbers are quite close. So to make sure, I will change the disk to 
a 72gb one and try again. I do not believe that I need to match my main disk 
size of 146gb, as I am not using that much disk space on it. But let me try 
this; it might be why I am getting this problem...



On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, ie is usr seperate from / 
 and so on ( maybe a df -k ). Don't appear to have any zones installed, just 
 to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like
 
 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled
 
 then all should be good, and cores should appear in /var/crash
 
 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process
 
 
 coreadm -u to load the new settings without rebooting.
 
 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.
 
 then rerun test and check /var/crash for core dump.
 
 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c 
 ufsBE -n zfsBE -p rootpool
 
 might give an indication, look for SIGBUS in the truss log
 
 NOTE, that you might want to reset the coreadm and ulimit for coredumps 
 after this, in order to not risk filling the system with coredumps in the 
 case of some utility coredumping in a loop say.
 
 
 Enda
 On 11/05/08 13:46, Krzys wrote:
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps to get 
 my system moved from ufs to zfs and not I am unable to boot it... can 
 anyone suggest what I could do to fix it?
 
 here are all my steps:
 
 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining which
 file systems should be in the new boot environment.
 Updating boot environment description database on all BEs.
 Updating system configuration files.
 The device /dev/dsk/c1t1d0s0 is not a root device for any boot 
 environment; cannot get BE ID.
 Creating configuration for boot environment zfsBE.
 Source boot environment is ufsBE.
 Creating boot environment zfsBE.
 Creating file systems on boot environment zfsBE.
 Creating zfs file system for / in zone global on 
 rootpool/ROOT/zfsBE.
 Populating file systems on boot environment zfsBE.
 Checking selection 

Re: [zfs-discuss] FYI - proposing storage pm project

2008-11-05 Thread Jane . Chu
Nathan Kroenert wrote:

 Not wanting to hijack this thread, but...

 I'm a simple man with simple needs. I'd like to be able to manually 
 spin down my disks whenever I want to...

 Anyone come up with a way to do this? ;)

You can do that, but you'd need to write your own user application
to issue a PM_DIRECT_PM ioctl command.  See the pm(7D) man page.
Personally I wouldn't recommend this, partly because I've never
heard of it being done in practice and am unsure of the potential consequences.

-jane


 Nathan.

 Jens Elkner wrote:

 On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Chu wrote:
 Hi,
  

   a disk may take seconds or
   even tens of seconds to come on line if it needs to be powered up
   and spin up.


 Yes - I really hate this on my U40 and tried to disable PM for HDD[s]
 completely. However, haven't found a way to do this (thought
 /etc/power.conf is the right place, but either it doesn't work as
 explained or is not the right place).

 HDD[s] are HITACHI HDS7225S Revision: A9CA

 Any hints, how to switch off PM for this HDD?

 Regards,
 jel.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Enda O'Connor
Hi,
Looks OK; there are some mounts left over from the previous failure.
In regards to swap and dump on the zpool, you can set them, e.g.
zfs set volsize=1G rootpool/dump
zfs set volsize=1G rootpool/swap

for instance; of course the above is only an example of how to do it.
Or make the zvols for rootpool/dump etc. before lucreate, in which case it 
will take the swap and dump sizes you have preset.
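
A sketch of the second approach, pre-creating the zvols so that lucreate picks up the sizes you chose (the sizes here are only an example):

# create swap and dump zvols at the desired sizes *before* running lucreate
zfs create -V 2G rootpool/swap
zfs create -V 1G rootpool/dump

# or resize them afterwards
zfs set volsize=2G rootpool/swap
zfs set volsize=1G rootpool/dump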

But I think we need to see the coredump/truss at this point to get an 
idea of where things went wrong.
Enda

On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, went trough the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0   16G   7.2G   8.4G47%/
 swap   8.3G   1.5M   8.3G 1%/etc/svc/volatile
 /dev/dsk/c1t0d0s6   16G   8.7G   6.9G56%/usr
 /dev/dsk/c1t0d0s1   16G   2.5G13G17%/var
 swap   8.5G   229M   8.3G 3%/tmp
 swap   8.3G40K   8.3G 1%/var/run
 /dev/dsk/c1t0d0s7   78G   1.2G76G 2%/export/home
 rootpool33G19K21G 1%/rootpool
 rootpool/ROOT   33G18K21G 1%/rootpool/ROOT
 rootpool/ROOT/zfsBE 33G31M21G 1%/.alt.tmp.b-UUb.mnt
 /export/home78G   1.2G76G 2% 
 /.alt.tmp.b-UUb.mnt/export/home
 /rootpool   21G19K21G 1%
 /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT  21G18K21G 1% 
 /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap   8.3G 0K   8.3G 0%
 /.alt.tmp.b-UUb.mnt/var/run
 swap   8.3G 0K   8.3G 0%/.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 
 
 
 so I have /, /usr, /var and /export/home on that primary disk. Original 
 disk is 140gb, this new one is only 36gb, but disk utilization on that 
 primary disk is much less utilized so easily should fit on it.
 
 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like total available disk space on my disk should be 33.92GB
 so its quite close as both numbers do approach. So to make sure I will 
 change disk for 72gb and will try again. I do not beleive that I need to 
 match my main disk size as 146gb as I am not using that much disk space 
 on it. But let me try this and it might be why I am getting this problem...
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, ie is usr seperate 
 from / and so on ( maybe a df -k ). Don't appear to have any zones 
 installed, just to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled

 then all should be good, and cores should appear in /var/crash

 otherwise the following should configure coreadm:
 coreadm -g /var/crash/core.%f.%p
 coreadm -G all
 coreadm -e global
 coreadm -e per-process


 coreadm -u to load the new settings without rebooting.

 also might need to set the size of the core dump via
 ulimit -c unlimited
 check ulimit -a first.

 then rerun test and check /var/crash for core dump.

 If that fails a truss via say truss -fae -o /tmp/truss.out lucreate 
 -c ufsBE -n zfsBE -p rootpool

 might give an indication, look for SIGBUS in the truss log

 NOTE, that you might want to reset the coreadm and ulimit for 
 coredumps after this, in order to not risk filling the system with 
 coredumps in the case of some utility coredumping in a loop say.


 Enda
 On 11/05/08 13:46, Krzys wrote:

 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 On 11/05/08 13:02, Krzys wrote:
 I am not sure what I did wrong but I did follow up all the steps 
 to get my system moved from ufs to zfs and not I am unable to boot 
 it... can anyone suggest what I could do to fix it?

 here are all my steps:

 [00:26:38] @adas: /root  zpool create rootpool c1t1d0s0
 [00:26:57] @adas: /root  lucreate -c ufsBE -n zfsBE -p rootpool
 Analyzing system configuration.
 Comparing source boot environment ufsBE file systems with the file
 system(s) you specified for the new boot environment. Determining 
 which
 file systems should 

[zfs-discuss] Oracle on ZFS

2008-11-05 Thread Mario Williams
My co-workers and I are trying to find out if anyone out there is
running production Oracle on ZFS.  We currently have a project in which we
have a SAN divided into 51 x 86.9GB LUNs (performance).  Oracle has been
set up on 18 or so partitions, and using SVM would be a logistical
nightmare as far as management goes.

 

Has anyone had any good dealings with Oracle on ZFS and, if so, what
issues did you run into?

 

Cheers,

 

Mario Williams

Sr. Unix/Linux Systems Admin

DigitalGlobe IS Projects Group

w - 303.684.4501

m - 303.834.2870

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Roch star cool stuff

2008-11-05 Thread Richard Elling
Our very own Roch (Bourbonnais) star is in a new video released
today as part of today's MySQL announcements.
http://www.sun.com/servers/index.jsp?intcmp=hp2008nov05_mysql_find
In the video, "A Look Inside Sun's MySQL Optimization Lab",
Roch gives a little bit of a tour, and at around 3:00 you get a glimpse
of some Really Cool Stuff which might be of interest to ZFS folks :-)
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] FYI - proposing storage pm project

2008-11-05 Thread Richard Elling
Nathan Kroenert wrote:
 Not wanting to hijack this thread, but...

 I'm a simple man with simple needs. I'd like to be able to manually spin 
 down my disks whenever I want to...

 Anyone come up with a way to do this? ;)
   
For those disks that support it,
luxadm stop /dev/rdsk/...
has worked for the past 10 years or so.
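For example (the device path here is purely illustrative):

   luxadm stop /dev/rdsk/c1t2d0s2    # spin the disk down
   luxadm start /dev/rdsk/c1t2d0s2   # spin it back up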
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS on Fit-PC Slim?

2008-11-05 Thread Vincent Fox
Anyone tried Nevada with ZFS on small platforms?

 I ordered one of these:

http://www.fit-pc.com/new/fit-pc-slim-specifications.html

Planning to stick in a 160-gig Samsung drive and use it as a lightweight 
household server.  Probably some Samba usage, and a tiny bit of Apache and 
RADIUS.   I don't need it to be super-fast, but slow as watching paint dry 
won't work either.   Just curious if anyone else has tried something similar; 
everything I read says ZFS wants 1 gig of RAM but doesn't say what size of penalty I 
would pay for having less.  I could run Linux on it of course, but now prefer to 
remain free of the tyranny of fsck.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Oracle on ZFS

2008-11-05 Thread Robert Milkowski




Hello Mario,

Wednesday, November 5, 2008, 7:25:06 PM, you wrote:

My co-workers and I are trying to find out if anyone out there is running production Oracle on ZFS. We currently have a project in which we have a SAN divided into 51 x 86.9GB LUNs (performance). Oracle has been set up on 18 or so partitions, and using SVM would be a logistical nightmare as far as management goes.

Has anyone had any good dealings with Oracle on ZFS and, if so, what issues did you run into.

I have deployed several Oracle databases on ZFS.
You need to do basic tuning, like matching ZFS's recordsize to the db_block_size in Oracle, disabling cache flushes in zfs or configuring the disk array to ignore them,
disabling atime updates, limiting the ARC size, disabling vdev prefetch, etc. Basically I tend to use a standard set of tunables for Oracle on ZFS. You can find most of it in the ZFS Evil Tuning Guide.
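As a rough sketch only (the pool/dataset names and the 8K db_block_size are assumptions for illustration, not a recommendation):

   # per-dataset properties, matching recordsize to an 8K db_block_size
   zfs create -o recordsize=8k -o atime=off dbpool/oradata
   zfs create -o atime=off dbpool/redo
   # ARC cap and cache-flush behaviour are /etc/system tunables, e.g.
   #   set zfs:zfs_arc_max = 0x100000000
   #   set zfs:zfs_nocacheflush = 1   (only if the array has protected write cache)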

It is useful to create a pool with several datasets for: archivelogs, redologs, indexes, tablespaces, binaries.
The benefit is that you can set up a different recordsize for the datasets containing tablespaces and indexes.

The other useful thing is to regularly do hotbackups by utilizing zfs snapshots and keeping the last one or the last couple of them. Usually it's very quick to create a hotbackup that way (2-10s), and depending on the environment you could do them on an hourly basis, for example. This is of course in addition to any other backup method like rman.
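A skeleton of that pattern might look like this (names are illustrative, and it assumes an Oracle release that supports ALTER DATABASE BEGIN BACKUP):

   # put the database in hot backup mode, snapshot the datasets, then release it
   echo "alter database begin backup;" | sqlplus -s / as sysdba
   zfs snapshot -r dbpool/oracle@hotbackup-`date +%Y%m%d%H%M`
   echo "alter database end backup;" | sqlplus -s / as sysdba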

When I consolidate several Oracle databases on one box or a cluster, I tend to dedicate a separate pool to each Oracle instance - that way not only do you keep them more separated, so they have less means to impact each other and problems are easier to diagnose, but you also have the option to independently fail over each instance in a cluster environment.

Disk/storage management with zfs is a pure joy :)


I haven't deployed a production Oracle utilizing ZFS built-in compression (yet) but I did it on test/dev platforms. Depending on your environment it could be beneficial to do so. I have deployed other products like LDAP, MySQL, etc. on ZFS using built-in compression and actually got better performance.


ZFS is also very helpful for dev environments where you have several Oracle instances (and other applications), each in its own Solaris zone, and each zone is on a separate zfs dataset (with some sub-datasets). You can put all of them on one zfs pool and then use quota and reservation to manage storage allocation - really helpful.
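For example (dataset names and sizes are made up):

   zfs create dbpool/zones
   zfs create -o quota=20g -o reservation=5g dbpool/zones/devdb1
   zfs create -o quota=20g -o reservation=5g dbpool/zones/devdb2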




--
Best regards,
Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Roch star cool stuff

2008-11-05 Thread Robert Milkowski
Hello Richard,

Wednesday, November 5, 2008, 7:29:35 PM, you wrote:

RE Our very own Roch (Bourbonnais) star is in a new video released
RE today as part of the MySQL releases today.
RE http://www.sun.com/servers/index.jsp?intcmp=hp2008nov05_mysql_find
RE In the video A Look Inside Sun's MySQL Optimization Lab
RE Roch gives a little bit of a tour and at around 3:00, you get a glimpse
RE of some Really Cool Stuff which might be of interest to ZFS folks :-)

That is similar to something... :P

Is it going to be open sourced (MySQL appliance)?

-- 
Best regards,
 Robert Milkowskimailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs

2008-11-05 Thread Victor Latushkin
Hi Jens,

Jens Hamisch wrote:
 Hi Erik,
 hi Victor,
 
 
 I have exactly the same problem as you described in your thread.

Exactly the same problem would mean that only the config object in the pool is 
corrupted. Are you 100% sure that you have the exact same problem?

 Could you please explain to me what to do to recover the data
 on the pool?

If the answer is yes, then all you need to do is import your pool with build 
99 or later and then export it. This will fix the config object.
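That is, something along these lines (the pool name is just an example):

   # booted from build 99 (snv_99) or later
   zpool import tank
   zpool export tank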

Cheers,
Victor
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-05 Thread Al Hopper
On Wed, Nov 5, 2008 at 3:55 PM, Vincent Fox [EMAIL PROTECTED] wrote:
 Anyone tried Nevada with ZFS on small platforms?

Yes - on the least powerful system I would even think of trying ZFS on
- an Intel D945GCLF2 (dual-core 1.6GHz Atom 330) with 2GB of RAM and
2 SATA ports.  It came up in 32-bit mode - now I know a later version
of OpenSolaris will run in 64-bit mode on this platform (the
motherboard with CPU costs about $85 - you just install an
inexpensive 2GB SIMM and you're ready to roll).  It was a little
slower than I would like - felt like I was going backwards in time to
the earlier P4 based systems.

  I ordered one of these:

 http://www.fit-pc.com/new/fit-pc-slim-specifications.html

 Planning to stick in a 160-gig Samsung drive and use it for lightweight 
 household server.  Probably some Samba usage, and a tiny bit of Apache  
 RADIUS.   I don't need it to be super-fast, but slow as watching paint dry 
 won't

You know that you need a minimum of 2 disks to form a (mirrored) pool
with ZFS?  A pool with no redundancy is not a good idea!
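i.e. something like (device names are examples):

   zpool create tank mirror c1t0d0 c1t1d0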

 work either.   Just curious if anyone else has tried something similar 
 everything I  read says ZFS wants 1-gig RAM but don't say what size of 
 penalty I would pay
 for having less.  I could run Linux on it of course but now prefer to remain 
 free of  the tyranny of fsck.

I don't think that there is enough CPU horse-power on this platform
to run OpenSolaris - and you need approx 768MB (3/4 of a GB) of RAM
just to install it.  After that OpenSolaris will only increase in size
over time...  To try to run it as a ZFS server would be madness -
worse than watching paint dry.

There are some RAID controllers on the market with more horsepower
than you've got in this system.  OTOH - this platform would make an
excellent firewall if you load MonoWall on it!

Regards,

-- 
Al Hopper  Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
   Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
This is so bizarre, I am unable to get past this problem. I thought I had not enough 
space on my hard drive (the new one) so I replaced it with a 72GB drive, but I am 
still getting that bus error. Originally when I restarted my server it did not want to 
boot, so I had to power it off and then back on, and it then booted up. But 
I am constantly getting this Bus Error - core dumped

anyway in my /var/crash I see hundreds of core.vold files and 3 core.cpio files. 
I would imagine the core.cpio files are the ones that are a direct result of what I am 
probably experiencing.

-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
-rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
-rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208



On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 Looks ok, some mounts left over from the previous fail.
 In regards to swap and dump on the zpool, you can set them:
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance; of course the above are only an example of how to do it.
 Or make the zvol for rootpool/dump etc. before lucreate, in which case it will 
 take the swap and dump size you have preset.
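 For example (the sizes here are only placeholders):
 zfs create -V 2g rootpool/dump
 zfs create -V 2g rootpool/swap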

 But I think we need to see the coredump/truss at this point to get an idea of 
 where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, and went through the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v 
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem             size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0       16G   7.2G   8.4G    47%    /
 swap                   8.3G   1.5M   8.3G     1%    /etc/svc/volatile
 /dev/dsk/c1t0d0s6       16G   8.7G   6.9G    56%    /usr
 /dev/dsk/c1t0d0s1       16G   2.5G    13G    17%    /var
 swap                   8.5G   229M   8.3G     3%    /tmp
 swap                   8.3G    40K   8.3G     1%    /var/run
 /dev/dsk/c1t0d0s7       78G   1.2G    76G     2%    /export/home
 rootpool                33G    19K    21G     1%    /rootpool
 rootpool/ROOT           33G    18K    21G     1%    /rootpool/ROOT
 rootpool/ROOT/zfsBE     33G    31M    21G     1%    /.alt.tmp.b-UUb.mnt
 /export/home            78G   1.2G    76G     2%    /.alt.tmp.b-UUb.mnt/export/home
 /rootpool               21G    19K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT          21G    18K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/var/run
 swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 
 
 
 so I have /, /usr, /var and /export/home on that primary disk. The original 
 disk is 140GB and this new one is only 36GB, but that 
 primary disk is much less utilized, so the data should easily fit on it.
 
 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did allocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like the total available disk space on my disk should be 33.92GB,
 so it's quite close, as both numbers nearly meet. So to make sure, I will 
 change the disk for a 72GB one and try again. I do not believe that I need to 
 match my main disk's size of 146GB, as I am not using that much disk space on 
 it. But let me try this, as it might be why I am getting this problem...
 
 
 
 On Wed, 5 Nov 2008, Enda O'Connor wrote:
 
 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, i.e. is /usr separate from / 
 and so on (maybe a df -k). You don't appear to have any zones installed, 
 just to confirm.
 Enda
 
 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like
 
 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid core dumps: enabled
  per-process setid core dumps: disabled
  global core dump logging: enabled
 
 then all should be good, and cores should appear in /var/crash
 
 

Re: [zfs-discuss] migrating ufs to zfs - cant boot system

2008-11-05 Thread Krzys
What makes me wonder is why I am not even able to see anything under boot -L, 
and why it is just not seeing this disk as a boot device? So strange.

On Wed, 5 Nov 2008, Krzys wrote:

 This is so bizarre, I am unable to get past this problem. I thought I had not enough
 space on my hard drive (the new one) so I replaced it with a 72GB drive, but I am still
 getting that bus error. Originally when I restarted my server it did not want to
 boot, so I had to power it off and then back on, and it then booted up. But
 I am constantly getting this Bus Error - core dumped

 anyway in my /var/crash I see hundreds of core.vold files and 3 core.cpio files.
 I would imagine the core.cpio files are the ones that are a direct result of what I am
 probably experiencing.

 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24854
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24867
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24880
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24893
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24906
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24919
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24932
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24950
 -rw---   1 root root 4126301 Nov  5 19:22 core.vold.24978
 drwxr-xr-x   3 root root   81408 Nov  5 20:06 .
 -rw---   1 root root 31351099 Nov  5 20:06 core.cpio.6208



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi
 Looks ok, some mounts left over from the previous fail.
 In regards to swap and dump on the zpool, you can set them:
 zfs set volsize=1G rootpool/dump
 zfs set volsize=1G rootpool/swap

 for instance; of course the above are only an example of how to do it.
 Or make the zvol for rootpool/dump etc. before lucreate, in which case it will
 take the swap and dump size you have preset.

 But I think we need to see the coredump/truss at this point to get an idea of
 where things went wrong.
 Enda

 On 11/05/08 15:38, Krzys wrote:
 I did upgrade my U5 to U6 from DVD, and went through the upgrade process.
 my file system is setup as follow:
 [10:11:54] [EMAIL PROTECTED]: /root  df -h | egrep -v
 platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr
 Filesystem             size   used  avail capacity  Mounted on
 /dev/dsk/c1t0d0s0       16G   7.2G   8.4G    47%    /
 swap                   8.3G   1.5M   8.3G     1%    /etc/svc/volatile
 /dev/dsk/c1t0d0s6       16G   8.7G   6.9G    56%    /usr
 /dev/dsk/c1t0d0s1       16G   2.5G    13G    17%    /var
 swap                   8.5G   229M   8.3G     3%    /tmp
 swap                   8.3G    40K   8.3G     1%    /var/run
 /dev/dsk/c1t0d0s7       78G   1.2G    76G     2%    /export/home
 rootpool                33G    19K    21G     1%    /rootpool
 rootpool/ROOT           33G    18K    21G     1%    /rootpool/ROOT
 rootpool/ROOT/zfsBE     33G    31M    21G     1%    /.alt.tmp.b-UUb.mnt
 /export/home            78G   1.2G    76G     2%    /.alt.tmp.b-UUb.mnt/export/home
 /rootpool               21G    19K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool
 /rootpool/ROOT          21G    18K    21G     1%    /.alt.tmp.b-UUb.mnt/rootpool/ROOT
 swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/var/run
 swap                   8.3G     0K   8.3G     0%    /.alt.tmp.b-UUb.mnt/tmp
 [10:12:00] [EMAIL PROTECTED]: /root 


 so I have /, /usr, /var and /export/home on that primary disk. The original
 disk is 140GB and this new one is only 36GB, but that
 primary disk is much less utilized, so the data should easily fit on it.

 / 7.2GB
 /usr 8.7GB
 /var 2.5GB
 /export/home 1.2GB
 total space 19.6GB
 I did notice that lucreate did allocate 8GB to SWAP and 4GB to DUMP
 total space needed 31.6GB
 seems like the total available disk space on my disk should be 33.92GB,
 so it's quite close, as both numbers nearly meet. So to make sure, I will
 change the disk for a 72GB one and try again. I do not believe that I need to
 match my main disk's size of 146GB, as I am not using that much disk space on
 it. But let me try this, as it might be why I am getting this problem...



 On Wed, 5 Nov 2008, Enda O'Connor wrote:

 Hi Krzys
 Also some info on the actual system
 ie what was it upgraded to u6 from and how.
 and an idea of how the filesystems are laid out, i.e. is /usr separate from /
 and so on (maybe a df -k). You don't appear to have any zones installed,
 just to confirm.
 Enda

 On 11/05/08 14:07, Enda O'Connor wrote:
 Hi
 did you get a core dump?
 would be nice to see the core file to get an idea of what dumped core,
 might configure coreadm if not already done
 run coreadm first, if the output looks like

 # coreadm
  global core file pattern: /var/crash/core.%f.%p
  global core file content: default
init core file pattern: core
init core file content: default
 global core dumps: enabled
per-process core dumps: enabled
   global setid 

[zfs-discuss] zfs free space

2008-11-05 Thread none
Hi, I'm trying to get a status from zfs on where the free space in my zfs 
filesystem is. It's a RAIDZ2 pool on 4 x 320GB HDDs. I have several snapshots, and 
I've just deleted roughly 150GB worth of data I didn't need from the current 
filesystem. The non-snapshot data now only takes up 156GB, but I can't see where 
the rest is in the snapshots.

The USED for the main filesystem (storage/freddofrog) shows 357G, and I would 
expect the deleted 150G or so of data to show up in the snapshots below, but it 
doesn't. A du -c -h on /storage/freddofrog shows 152G used, about the same as 
the REFER for storage/freddofrog below.

So how can I tell where the space is being used (which snapshots)?
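For reference, a per-snapshot breakdown can be listed with something like:

   zfs list -r -t snapshot -o name,used,referenced storage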

[EMAIL PROTECTED]:~$ zfs list
NAME USED  AVAIL  REFER
storage  357G   227G  28.4K
[EMAIL PROTECTED] 0  -  28.4K  
[EMAIL PROTECTED] 0  -  28.4K  
storage/freddofrog   357G   227G   151G
storage/[EMAIL PROTECTED]   4.26G  -   187G  
storage/[EMAIL PROTECTED] 61.1M  -   206G  
storage/[EMAIL PROTECTED]  773M  -   201G  
storage/[EMAIL PROTECTED] 33.2M  -   192G  
storage/[EMAIL PROTECTED]   62.6M  -   212G  
storage/[EMAIL PROTECTED] 5.29G  -   217G
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-05 Thread Vincent Fox
 You know that you need a minimum of 2 disks to form a
 (mirrored) pool
 with ZFS?  A pool with no redundancy is not a good
 idea!

According to the slides I have seen, a ZFS filesystem even on a single disk can 
handle massive amounts of sector failure before it becomes unusable.   I seem 
to recall it said 1/8th of the disk?  So even on a single disk the redundancy 
in the metadata is valuable.  And if I don't have really very much data I can 
set copies=2 so I have better protection for the data as well.
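For instance (pool and device names here are hypothetical):

   zpool create tank c1d0
   zfs set copies=2 tank      # applies to data written after the property is set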

My goal is a compact low-powered and low-maintenance widget.  Eliminating the 
chance of fsck is always a good thing now that I have tasted ZFS.

I'm going to try and see if Nevada will even install when it arrives, and 
report back.  Perhaps BSD is another option.  If not I will fall back to Ubuntu.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-05 Thread Bob Friesenhahn
On Wed, 5 Nov 2008, Vincent Fox wrote:

 According to the slides I have seen, a ZFS filesystem even on a 
 single disk can handle massive amounts of sector failure before it 
 becomes unusable.

When a tiny volcanic jet of molten plastic shoots out of the chip on 
the disk drive (followed by a bit of smoke and foul odor), the entire 
disk becomes unusable.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss