[zfs-discuss] Trouble mirroring root pool onto larger disk

2011-07-01 Thread Jiawen Chen
Hi,

I have Solaris 11 Express with a root pool installed on a 500 GB disk.  I'd 
like to migrate it to a 2 TB disk.  I've followed the instructions on the ZFS 
troubleshooting guide 
(http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk)
 and the Oracle ZFS Administration Guide 
(http://download.oracle.com/docs/cd/E19253-01/819-5461/ghzvx/index.html) pretty 
carefully.  However, things still don't work: after resilvering, I switch my 
BIOS to boot from the 2 TB disk and at boot, *some* kind of error message 
appears for < 1 second before the machine reboots itself.  Is there any way I 
can view this message?  I.e., is this message written to a log anywhere?

As far as I can tell, I've set up all the partitions and slices correctly 
(VTOC below).  The only error message I get is when I do:

# zpool attach rpool c9t0d0s0 c13d1s0

(c9t0d0s0 is the 500 GB original disk, c13d1s0 is the 2 TB new disk)

I get:

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c13d1s0 overlaps with /dev/dsk/c13d1s2

But that's a well-known bug, and I use -f to force the attach since the backup 
slice shouldn't matter.  If anyone has any ideas, I'd really appreciate it.
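
For reference, a minimal sketch of the attach step using the device names above 
(illustrative only; it restates the commands under discussion, not a verified fix):

  # attach the new slice as a mirror of the current root slice;
  # -f overrides the slice 0 / slice 2 overlap warning quoted above
  zpool attach -f rpool c9t0d0s0 c13d1s0

  # wait for the resilver to complete before switching the BIOS boot disk
  zpool status rpool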
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trouble mirroring root pool onto larger disk

2011-07-01 Thread Jiawen Chen
Here's my disk layout
=

500 GB disk

fdisk

 Total disk size is 60801 cylinders
 Cylinder size is 16065 (512 byte) blocks

                                               Cylinders
      Partition   Status    Type          Start    End    Length    %
      =========   ======    ============  =====   =====   ======   ===
          1       Active    Solaris2          1   60800    60800   100

VTOC:

partition p
Current partition table (original):
Total disk cylinders available: 60798 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 60797      465.73GB    (60797/0/0) 976703805
  1 unassigned    wm       0                 0        (0/0/0)             0
  2     backup    wu       0 - 60797      465.74GB    (60798/0/0) 976719870
  3 unassigned    wm       0                 0        (0/0/0)             0
  4 unassigned    wm       0                 0        (0/0/0)             0
  5 unassigned    wm       0                 0        (0/0/0)             0
  6 unassigned    wm       0                 0        (0/0/0)             0
  7 unassigned    wm       0                 0        (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                 0        (0/0/0)             0

=

2 TB disk:

fdisk:


 Total disk size is 60799 cylinders
 Cylinder size is 64260 (512 byte) blocks

                                               Cylinders
      Partition   Status    Type          Start    End    Length    %
      =========   ======    ============  =====   =====   ======   ===
          1       Active    Solaris2          1   60798    60798   100

VTOC:

partition p
Current partition table (original):
Total disk cylinders available: 60796 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 60795        1.82TB    (60795/0/0) 3906686700
  1 unassigned    wm       0                 0        (0/0/0)              0
  2     backup    wu       0 - 60795        1.82TB    (60796/0/0) 3906750960
  3 unassigned    wm       0                 0        (0/0/0)              0
  4 unassigned    wm       0                 0        (0/0/0)              0
  5 unassigned    wm       0                 0        (0/0/0)              0
  6 unassigned    wm       0                 0        (0/0/0)              0
  7 unassigned    wm       0                 0        (0/0/0)              0
  8       boot    wu       0 -     0       31.38MB    (1/0/0)          64260
  9 unassigned    wm       0                 0        (0/0/0)              0

=
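
As a sanity check (not part of the original post), the label on the new disk can 
be re-read before attaching; a hedged sketch, assuming the c13d1 device name used above:

  # print the VTOC from the label; slice 0 should start at cylinder 1,
  # with slice 8 (boot) on cylinder 0, as in the table above
  prtvtoc /dev/rdsk/c13d1s2

  # dump the fdisk table; it should show one active Solaris2 partition
  fdisk -W - /dev/rdsk/c13d1p0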
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cross platform (freebsd) zfs pool replication

2011-07-01 Thread Andriy Gapon
on 01/07/2011 00:12 Joeri Vanthienen said the following:
 Hi,
 
 I have two servers running: freebsd with a zpool v28 and a nexenta 
 (opensolaris b134) running zpool v26.
 
 Replication (with zfs send/receive) from the nexenta box to the freebsd box works 
 fine, but I have a problem accessing my replicated volume. When I type the 
 command cd /remotepool/us (for /remotepool/users) and autocomplete with the 
 tab key, I get a panic.
 
 check the panic @ http://www.boeri.be/panic.jpg

Since this is a FreeBSD panic, I suggest that you try getting help on FreeBSD
mailing lists, f...@freebsd.org looks like the best choice.
BTW, your report doesn't contain your actual panic message and that could be
important.
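
One generic way to capture the full panic text on FreeBSD so it survives the 
reboot (a sketch, not taken from this report):

  # /etc/rc.conf
  dumpdev="AUTO"   # write kernel crash dumps to the swap device on panic

  # after the reboot, savecore(8) places the dump in /var/crash;
  # the info.* file there should record the panic string
  ls /var/crash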


 - autocomplete tab does a ls command in the background, I think
 
 I think there is a problem with NFSv4 acl/id mapping. Normal zfs file systems 
 (initially created on the FreeBSD box) are working fine. 
 
 The nexenta box is integrated with Active Directory, and the mappings for the 
 users on this CIFS share have been created on the fly (ephemeral ID mapping). 
 
 Any solution for this? I really need the ACL permissions to be replicated. So 
 rsync is not a solution. Please help :) 
 
 root@ ~]# zfs get all remotepool/users
 NAME              PROPERTY              VALUE                    SOURCE
 remotepool/users  type                  filesystem               -
 remotepool/users  creation              Wed Jun 29 14:42 2011    -
 remotepool/users  used                  9.06G                    -
 remotepool/users  available             187G                     -
 remotepool/users  referenced            9.06G                    -
 remotepool/users  compressratio         1.00x                    -
 remotepool/users  mounted               yes                      -
 remotepool/users  quota                 none                     default
 remotepool/users  reservation           none                     default
 remotepool/users  recordsize            128K                     default
 remotepool/users  mountpoint            /remotepool/users        default
 remotepool/users  sharenfs              off                      default
 remotepool/users  checksum              on                       default
 remotepool/users  compression           off                      default
 remotepool/users  atime                 on                       default
 remotepool/users  devices               on                       default
 remotepool/users  exec                  on                       default
 remotepool/users  setuid                on                       default
 remotepool/users  readonly              off                      default
 remotepool/users  jailed                off                      default
 remotepool/users  snapdir               hidden                   received
 remotepool/users  aclinherit            passthrough              received
 remotepool/users  canmount              on                       default
 remotepool/users  xattr                 off                      temporary
 remotepool/users  copies                1                        default
 remotepool/users  version               5                        -
 remotepool/users  utf8only              off                      -
 remotepool/users  normalization         none                     -
 remotepool/users  casesensitivity       insensitive              -
 remotepool/users  vscan                 off                      default
 remotepool/users  nbmand                on                       received
 remotepool/users  sharesmb              name=users,guestok=true  received
 remotepool/users  refquota              none                     default
 remotepool/users  refreservation        none                     default
 remotepool/users  primarycache          all                      default
 remotepool/users  secondarycache        all                      default
 remotepool/users  usedbysnapshots       0                        -
 remotepool/users  usedbydataset         9.06G                    -
 remotepool/users  usedbychildren        0                        -
 remotepool/users  usedbyrefreservation  0                        -
 remotepool/users  logbias               latency                  default
 remotepool/users  dedup                 off                      default
 remotepool/users  mlslabel              -
 remotepool/users  sync                  standard                 default


-- 
Andriy Gapon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trouble mirroring root pool onto larger disk

2011-07-01 Thread Cindy Swearingen

Hi Jiawen,

Yes, the boot failure message would be very helpful.

The first thing to rule out is:

I think you need to be running a 64-bit kernel to
boot from a 2 TB disk.
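
A quick way to confirm which kernel is booted (standard Solaris command, added 
here only as a hint):

  # prints e.g. "64-bit amd64 kernel modules" when a 64-bit kernel is running
  isainfo -kv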

Thanks,

Cindy

On 07/01/11 02:58, Jiawen Chen wrote:

Hi,

I have Solaris 11 Express with a root pool installed on a 500 GB disk.  I'd 
like to migrate it to a 2 TB disk.  I've followed the instructions on the ZFS 
troubleshooting guide 
(http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Replacing.2FRelabeling_the_Root_Pool_Disk)
 and the Oracle ZFS Administration Guide 
(http://download.oracle.com/docs/cd/E19253-01/819-5461/ghzvx/index.html) pretty 
carefully.  However, things still don't work: after resilvering, I switch my BIOS 
to boot from the 2 TB disk and at boot, *some* kind of error message appears for 
< 1 second before the machine reboots itself.  Is there any way I can view this 
message?  I.e., is this message written to a log anywhere?

As far as I can tell, I've set up all the partitions and slices correctly 
(VTOC below).  The only error message I get is when I do:

# zpool attach rpool c9t0d0s0 c13d1s0

(c9t0d0s0 is the 500 GB original disk, c13d1s0 is the 2 TB new disk)

I get:

invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c13d1s0 overlaps with /dev/dsk/c13d1s2

But that's a well-known bug, and I use -f to force the attach since the backup 
slice shouldn't matter.  If anyone has any ideas, I'd really appreciate it.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Trouble mirroring root pool onto larger disk

2011-07-01 Thread Jim Klimov

Do you get to the GRUB menu while booting from the larger drive
(then you can try verbose and/or mdb boots to Solaris and catch
its panic errors), or does the machine reboot before even getting
to GRUB?
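
For the verbose/kmdb boot mentioned above, the usual GRUB-legacy approach on 
Solaris 11 Express is roughly the following (a sketch; the exact menu entry varies):

  # at the GRUB menu: press 'e' on the Solaris entry, then 'e' on the
  # kernel$ line, append the flags, press Enter, then 'b' to boot
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -k
  # -v prints verbose boot messages; -k loads kmdb so a panic should
  # drop into the debugger instead of rebooting immediately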

Couple of silly questions:

1) Did you installgrub onto the second drive? (See the example after this list.)

2) Are you certain your BIOS supports booting from a 2 TB device
with an MBR partition? At that size it should probably be a GPT
partition, which would present a protective (mostly empty) MBR table...
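
Regarding point 1, the usual command for the new disk would be something like 
this (device name taken from earlier in the thread; run it from the currently 
booted Solaris 11 Express environment):

  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c13d1s0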

So try to see whether that error text mentions something like a missing boot
device or an empty hard disk, etc.

Enabling your BIOS to pause on boot errors (known from the funny
message "Keyboard not found, press F1 to continue") can help you
read the error. Otherwise you can try to quickly press the PAUSE key
to freeze the BIOS output and SPACE to go on booting. Alternate both
to try and catch the error string...


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 700GB gone?

2011-07-01 Thread Roy Sigurd Karlsbakk
 On Thu, Jun 30, 2011 at 11:40:53PM +0100, Andrew Gabriel wrote:
   On 06/30/11 08:50 PM, Orvar Korvar wrote:
  I have a 1.5TB disk that has several partitions. One of them is
  900GB. Now I can only see 300GB. Where is the rest? Is there a
  command I can do to reach the rest of the data? Will scrub help?
 
  Not much to go on - no one can answer this.
 
  How did you go about partitioning the disk?
  What does the fdisk partitioning look like (if its x86)?
  What does the VToC slice layout look like?
  What are you using each partition and slice for?
  What tells you that you can only see 300GB?
 
 Are you using 32-bit or 64-bit solaris?

IIRC Solaris x86-32 can't even address 1.5 TB, so the disk shouldn't show up at 
all, or only as a 1 TB drive. If he sees 300GB, there might be some slight 
overhead, plus the fact that 1 TB as reported by drive producers equals about 
0.91 TiB as reported by the OS (1 TB = 10^12 bytes, 1 TiB = 2^40 bytes). The 
1.5 TB drive will then show up as roughly a 1.36 TiB drive.
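
The decimal-versus-binary arithmetic above can be checked quickly in any shell with bc:

  # 1.5 TB (decimal) expressed in TiB (binary)
  echo 'scale=2; 1.5 * 10^12 / 2^40' | bc
  1.36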

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send not working when i/o errors in pool

2011-07-01 Thread Tuomas Leikola
Rsync with some ignore-errors option, maybe? In any case you've lost some
data, so make sure to keep a record of the zpool status -v output.
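A hedged sketch of that approach (rsync skips and reports files it cannot read 
and keeps going; the mountpoint and destination below are placeholders, and the 
pool name pent is taken from the error message):

  # record which files are known damaged first
  zpool status -v pent

  # copy what is readable; read errors are logged and rsync exits non-zero
  rsync -avH /pent/ newhost:/backup/pent/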
On Jul 1, 2011 12:26 AM, Tom Demo tom.d...@lizard.co.nz wrote:
 Hi there.

 I am trying to get my filesystems off a pool that suffered irreparable
damage due to 2 disks partially failing in a 5 disk raidz.

 One of the filesystems gives an I/O error when trying to read one of the
files off it.

 This filesystem cannot be sent - zfs send stops with this error:

 warning: cannot send 'pent@wdFailuresAndSol11Migrate': I/O error

 I have tried using zfs set checksum=off but that doesn't change
anything.

 Any tips how I can get these filesystems over to the new machine please ?

 Thanks,

 Tom.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] about write balancing

2011-07-01 Thread Tuomas Leikola
Sorry everyone, this one was indeed a case of root stupidity. I had
forgotten to upgrade to OI 148, which apparently fixed the write balancer.
Duh. (I didn't find a full changelog via Google, though.)
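For anyone who wants to watch this behaviour directly, per-vdev write 
distribution can be observed with the standard iostat subcommand (the pool 
name here is a placeholder):

  # one line per vdev, repeated every 60 seconds
  zpool iostat -v tank 60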
On Jun 30, 2011 3:12 PM, Tuomas Leikola tuomas.leik...@gmail.com wrote:
 Thanks for the input. This was not a case of a degraded vdev, but only a
 missing log device (which I cannot get rid of). I'll try offlining some
 vdevs and see what happens - although this should be automatic at all
 times IMO.
 On Jun 30, 2011 1:25 PM, Markus Kovero markus.kov...@nebula.fi wrote:


 To me it seems that writes are not directed properly to the devices that
 have most free space - almost exactly the opposite. The writes seem to go to
 the devices that have _least_ free space, instead of the devices that have
 most free space. The same effect that can be seen in these 60s averages can
 also be observed in a shorter timespan, like a second or so.

 Is there something obvious I'm missing?


 Not sure how OI should behave; I've managed to even out writes and space usage
 between vdevs by bringing a device offline in the vdev you don't want writes to
 end up on.
 If you have a degraded vdev in your pool, zfs will try not to write there,
 and this may be the case here as well, since I don't see zpool status output.

 Yours
 Markus Kovero

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 700GB gone?

2011-07-01 Thread Orvar Korvar
I am using 64-bit S11E. Everything worked fine earlier, but now I suspect the 
disk is breaking down; it behaves strangely. I have several partitions:
1) OpenSolaris b134 upgraded to S11E
2) WinXP
3) FAT32
4) ZFS storage pool of 900GB

Earlier, everything was fine. But suddenly OpenSolaris does not work anymore. 
When I boot via GRUB, it gives an error message and halts.

I have installed S11E on a new disk, and when I try to import the old rpool 
(partition 1 above), the computer reboots:
http://opensolaris.org/jive/thread.jspa?messageID=517089#517089

So there are some problems. My storage pool (partition 4 above) is suddenly only 
300GB, and where is the data? I have one snapshot or so. I have not deleted stuff.

I suspect the disk is breaking down. How can I tell? I am scrubbing the disk 
now, but so far there are no errors (50% done). Can I examine the disk somehow? 
format shows all four partitions.
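
A few standard Solaris checks that may help decide whether the disk itself is 
failing (generic suggestions, not from the thread):

  # per-device soft/hard/transport error counters
  iostat -En

  # fault-management error telemetry, including errors ZFS has reported
  fmdump -eV | less

  # per-vdev read/write/checksum error counts once the scrub finishes
  zpool status -v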
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss