Re: [zfs-discuss] ZFS layout recommendations

2007-11-27 Thread Tommy McNeely
The question is: if you *temporarily* migrate your zones to UFS to install the
big bad S10u4 patch, and then migrate them back to ZFS afterwards, will patching
still work after that? Put another way, has the patching problem with zone roots
on ZFS been resolved for S10u4?
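
For context, the round trip I have in mind looks roughly like this (just a
sketch; "myzone", /zfszones and /ufszones are made-up names, and whether
"zoneadm move" back onto ZFS leaves patching happy is exactly what I'm asking):

# zoneadm -z myzone halt
# zoneadm -z myzone move /ufszones/myzone    (zonepath temporarily on UFS)
  (apply the S10u4 patch bundle here)
# zoneadm -z myzone move /zfszones/myzone    (back onto ZFS)
# zoneadm -z myzone boot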

Tommy
 
 


[zfs-discuss] cannot import 'rpool': one or more devices is currently unavailable

2009-10-22 Thread Tommy McNeely
I have a system whose rpool has gone defunct. The rpool is made of a
single disk, which is actually a RAID 5EE volume built from all 8 146G
disks in the box, behind an Adaptec-brand RAID card. It was running
nv_107, but it's currently net-booted to nv_121. I have already checked
in the RAID card BIOS, and it says the volume is optimal. We had a power
outage in BRM07 on Tuesday, and the system appeared to boot back up, but
then went wonky. I power-cycled it, and it came back to a GRUB prompt
because it couldn't read the filesystem.


# uname -a
SunOS  5.11 snv_121 i86pc i386 i86pc

# zpool import
 pool: rpool
   id: 7197437773913332097
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
   the '-f' flag.
  see: http://www.sun.com/msg/ZFS-8000-EY
config:

       rpool       ONLINE
         c0t0d0s0  ONLINE
# zpool import -f 7197437773913332097
cannot import 'rpool': one or more devices is currently unavailable
#

# zpool import -a -f -R /a
cannot import 'rpool': one or more devices is currently unavailable
# zdb -l /dev/dsk/c0t0d0s0

LABEL 0

   version=14
   name='rpool'
   state=0
   txg=742622
   pool_guid=7197437773913332097
   hostid=4930069
   hostname=''
   top_guid=5620634672424557591
   guid=5620634672424557591
   vdev_tree
   type='disk'
   id=0
   guid=5620634672424557591
   path='/dev/dsk/c0t0d0s0'
   devid='id1,s...@tsun_stk_raid_intefd1dfe0/a'
   phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
   whole_disk=0
   metaslab_array=24
   metaslab_shift=33
   ashift=9
   asize=880083730432
   is_log=0

LABEL 1

   version=14
   name='rpool'
   state=0
   txg=742622
   pool_guid=7197437773913332097
   hostid=4930069
   hostname=''
   top_guid=5620634672424557591
   guid=5620634672424557591
   vdev_tree
   type='disk'
   id=0
   guid=5620634672424557591
   path='/dev/dsk/c0t0d0s0'
   devid='id1,s...@tsun_stk_raid_intefd1dfe0/a'
   phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
   whole_disk=0
   metaslab_array=24
   metaslab_shift=33
   ashift=9
   asize=880083730432
   is_log=0

LABEL 2

   version=14
   name='rpool'
   state=0
   txg=742622
   pool_guid=7197437773913332097
   hostid=4930069
   hostname=''
   top_guid=5620634672424557591
   guid=5620634672424557591
   vdev_tree
   type='disk'
   id=0
   guid=5620634672424557591
   path='/dev/dsk/c0t0d0s0'
   devid='id1,s...@tsun_stk_raid_intefd1dfe0/a'
   phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
   whole_disk=0
   metaslab_array=24
   metaslab_shift=33
   ashift=9
   asize=880083730432
   is_log=0

LABEL 3

   version=14
   name='rpool'
   state=0
   txg=742622
   pool_guid=7197437773913332097
   hostid=4930069
   hostname=''
   top_guid=5620634672424557591
   guid=5620634672424557591
   vdev_tree
   type='disk'
   id=0
   guid=5620634672424557591
   path='/dev/dsk/c0t0d0s0'
   devid='id1,s...@tsun_stk_raid_intefd1dfe0/a'
   phys_path='/p...@0,0/pci8086,3...@4/pci108e,2...@0/d...@0,0:a'
   whole_disk=0
   metaslab_array=24
   metaslab_shift=33
   ashift=9
   asize=880083730432
   is_log=0
# zdb -cu -e -d /dev/dsk/c0t0d0s0
zdb: can't open /dev/dsk/c0t0d0s0: No such file or directory
# zdb -e rpool -cu
zdb: can't open rpool: No such device or address
# zdb -e 7197437773913332097
zdb: can't open 7197437773913332097: No such device or address
#

I obviously have no clue how to wield zdb.
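
In case it helps someone point me the right way, this is the general form I
*think* those commands are supposed to take (only a sketch; I'm assuming the
device really is visible under /dev/dsk, and the -p search-path option is the
part I'm least sure about):

# zdb -e -p /dev/dsk -u rpool                 (dump the uberblocks of the exported pool)
# zpool import -d /dev/dsk -f -R /a rpool     (point import at an explicit device dir)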

Any help you can offer would be appreciated.

Thanks,
Tommy



[zfs-discuss] default child filesystem quota

2009-10-26 Thread Tommy McNeely
I may be searching for the wrong thing, but I am trying to figure out a way to
set a default quota for child file systems. I tried setting the quota on the
top-level dataset, but that doesn't have the desired effect: it caps the total
space used by the whole tree rather than each child individually. I'd like newly
created filesystems under a certain dataset to default to a 10G quota (for
example). I see this as useful for ZFS home directories (are we still doing
that?), and especially for zone roots. I searched around a little but couldn't
find what I was looking for. Can anyone point me in the right direction?
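
The closest I've come up with on my own is setting the quota explicitly at
creation time, or sweeping the existing children with a loop, roughly like this
(just a sketch; "tank/zones" is a made-up parent dataset, and what I'd really
like is an inherited default rather than this):

# zfs create -o quota=10g tank/zones/newzone
# for fs in `zfs list -H -o name -r tank/zones | grep -v '^tank/zones$'`; do zfs set quota=10g $fs; done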

Thanks in advance,
Tommy