Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-22 Thread Dale Sears
Would this work?  (to get rid of an EFI label).

dd if=/dev/zero of=/dev/dsk/thedisk bs=1024k count=1

Then use

format

format might complain that the disk is not labeled.  You
can then label the disk.

Dale



Antonius wrote:
 can you recommend a walk-through for this process, or a bit more of a 
 description? I'm not quite sure how I'd use that utility to repair the EFI 
 label
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Failure to boot from zfs on Sun v880

2009-01-22 Thread Al Slater
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi.

I am trying to move the root volume from an existing svm mirror to a zfs
root.  The machine is a Sun V880 (SPARC) running nv_96, with OBP version
4.22.34 which is AFAICT the latest.

The svm mirror was constructed as follows

/
d4   m   18GB d14
d14  s   35GB c1t0d0s0
d24  s   35GB c1t1d0s0

swap
d3   m   16GB d13
d13  s   16GB c1t0d0s3
d23  s   16GB c1t1d0s3

/var
d5   m   8.0GB d15
d15  s   16GB c1t0d0s1
d25  s   16GB c1t1d0s1

I removed c1t1d0 from the mirror:
# metadetach d4 d24
# metaclear d24
# metadetach d3 d23
# metaclear d23
# metadetach d5 d25
# metaclear d25

Then removed the metadb from c1t1d0s7:
# metadb -d c1t1d0s7

Resized s0 on c1t1d0 to include the whole disc and relabelled with an
SMI label.

Created the zfs root pool:
# zpool create rpool c1t1d0s0

Created new BE:

# lucreate -c Sol11_b96 -n Sol11_b96_zfs -p rpool

This ran fine, so I activated the new BE and rebooted

# luactivate Sol11_b96_zfs
# init 6

The system then panicked during the reboot with:

Rebooting with command: boot
Boot device: /p...@8,60/SUNW,q...@2/f...@0,0/d...@w2104cfaf121b,0:a
 File and args:
SunOS Release 5.11 Version snv_96 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
supported

panic[cpu7]/thread=180e000: vfs_mountroot: cannot remount root

0180b950 genunix:vfs_mountroot+384 (1899fe8, 12bc400, 12bc400,
12bc400, 12bc400, 1873768)
  %l0-3: 01873c68 0600218a4040 01899fe8 01899fe8
  %l4-7: 2420 0420 2000 0600218a4040
0180ba10 genunix:main+bc (1815000, 180c000, 1835bc0, 1815200, 1,
180e000)
  %l0-3: 01836b58 70002000 010c0800 
  %l4-7: 0183ac00 0007 0180c000 01836800

syncing file systems... done
skipping system dump - no dump device configured
rebooting...

It then cycles the reboot and panic until I pull the disc and reboot
continues from the original boot disc.

Any ideas why mounting root on zfs is apparently not supported?

- --
Al Slater

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (SunOS)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkl4QTcACgkQz4fTOFL/EDZr7ACfQEAB3jMDuU/zh2u9UWu0n8f8
7RAAn0TtX94ZIs4nb+ybY00PJCw4a+0e
=Rq9O
-END PGP SIGNATURE-

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Recovery after SAN Corruption

2009-01-22 Thread fredrick phol
From memory, ZFS should repair any files it can and mark any it can't repair as bad.

Restore the files ZFS has marked as corrupted from backup?
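
For example, something along these lines (pool name is just a placeholder) will 
list exactly which files ZFS has flagged with unrecoverable errors, so you know 
what to restore:

# zpool status -v tank     # -v prints the paths of files with permanent errors
# zpool scrub tank         # re-check the pool after restoring from backup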
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] JZ spammer

2009-01-22 Thread dick hoogendijk

fredrick phol wrote:
 seconded

Guess I missed something about JZ, because I created a sieve rule for him
and never saw anything from him again. Is he going to be blocked in some
way?

-- 
Dick Hoogendijk -- PGP/GnuPG key: F86289CE
+http://nagual.nl/ | SunOS 10u6 10/08 ZFS+

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool status -x strangeness

2009-01-22 Thread Ben Miller
The pools are upgraded to version 10.  Also, this is on Solaris 10u6.

# zpool upgrade
This system is currently running ZFS pool version 10.

All pools are formatted using this version.
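
For the 'zfs upgrade' half of the question, the equivalent checks would be 
something like this (output omitted):

# zfs upgrade               # lists any filesystems older than the current ZFS version
# zfs get -r version pool1  # shows the filesystem version property per dataset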

Ben

 What's the output of 'zfs upgrade' and 'zpool upgrade'? (I'm just
 curious - I had a similar situation which seems to be resolved now
 that I've gone to Solaris 10u6 or OpenSolaris 2008.11).

 On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller mil...@eecis.udel.edu wrote:
  Bug ID is 6793967.
 
  This problem just happened again.
  % zpool status pool1
   pool: pool1
   state: DEGRADED
   scrub: resilver completed after 0h48m with 0 errors on Mon Jan  5 12:30:52 2009
  config:
 
 NAME   STATE READ WRITE CKSUM
 pool1  DEGRADED 0 0 0
   raidz2   DEGRADED 0 0 0
 c4t8d0s0   ONLINE   0 0 0
 c4t9d0s0   ONLINE   0 0 0
 c4t10d0s0  ONLINE   0 0 0
 c4t11d0s0  ONLINE   0 0 0
 c4t12d0s0  REMOVED  0 0 0
 c4t13d0s0  ONLINE   0 0 0
 
  errors: No known data errors
 
  % zpool status -x
  all pools are healthy
  %
  # zpool online pool1 c4t12d0s0
  % zpool status -x
   pool: pool1
   state: ONLINE
  status: One or more devices is currently being resilvered.  The pool will
 continue to function, possibly in a degraded state.
  action: Wait for the resilver to complete.
   scrub: resilver in progress for 0h0m, 0.12% done, 2h38m to go
  config:
 
 NAME   STATE READ WRITE CKSUM
 pool1  ONLINE   0 0 0
   raidz2   ONLINE   0 0 0
 c4t8d0s0   ONLINE   0 0 0
 c4t9d0s0   ONLINE   0 0 0
 c4t10d0s0  ONLINE   0 0 0
 c4t11d0s0  ONLINE   0 0 0
 c4t12d0s0  ONLINE   0 0 0
 c4t13d0s0  ONLINE   0 0 0
 
  errors: No known data errors
  %
 
  Ben
 
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failure to boot from zfs on Sun v880

2009-01-22 Thread Mark J Musante
On Thu, 22 Jan 2009, Al Slater wrote:
 Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
 supported

This line is coming from svm, which leads me to believe that the zfs 
boot blocks were not properly installed by live upgrade.

You can try doing this by hand, with the command:

installboot -F zfs /usr/platform/`uname -m`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0

But if live upgrade was unable to do that already, there may be other 
issues to uncover once you actually do boot.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] time slider cleanup errors

2009-01-22 Thread Tom Buskey
I'm running OpenSolaris 10/08 snv_101b with the auto snapshot packages.

I'm getting this error:
 /usr/lib/time-slider-cleanup -y
Traceback (most recent call last):
  File "/usr/lib/time-slider-cleanup", line 10, in <module>
    main(abspath(__file__))
  File "/usr/lib/../share/time-slider/lib/time_slider/cleanupmanager.py", line 363, in main
    cleanup.send_notification()
  File "/usr/lib/../share/time-slider/lib/time_slider/cleanupmanager.py", line 259, in send_notification
    if linedetails[1]:
IndexError: list index out of range


Any way to clear this?
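
One thing that might be worth trying (assuming the SMF service is named 
application/time-slider) is restarting the service and seeing whether it goes 
into maintenance, and why:

# svcadm restart time-slider
# svcs -x time-slider      # shows the service state and the reason if it is degraded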
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cifs perfomance

2009-01-22 Thread Matt Harrison
Brandon High wrote:
 On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:
 Several people reported this same problem.  They changed their
 ethernet adaptor to an Intel ethernet interface and the performance
 problem went away.  It was not ZFS's fault.
 
 It may not be a ZFS problem, but it is a OpenSolaris problem. The
 drivers for hardware Realtek and other NICs are ... not so great.
 
 -B
 

+1. I was having terrible problems with the onboard RTL NICs, but after 
changing to a decent e1000 all is peachy in my world.

Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cifs perfomance

2009-01-22 Thread Toby Thain

On 21-Jan-09, at 9:11 PM, Brandon High wrote:

 On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:
 Several people reported this same problem.  They changed their
 ethernet adaptor to an Intel ethernet interface and the performance
 problem went away.  It was not ZFS's fault.

 It may not be a ZFS problem, but it is a OpenSolaris problem. The
 drivers for hardware Realtek and other NICs are ... not so great.


On the other hand RealTek NICs are not renowned for quality either.  
Aren't they basically the cheapest available?

--Toby



 -B

 -- 
 Brandon High : bh...@freaks.com
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] checksum errors on Sun Fire X4500

2009-01-22 Thread Jay Anderson
I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum 
errors reported by zpool status. The errors are showing up over time on every 
disk in the pool. In normal operation there might be errors on two or three 
disks each day, and sometimes there are enough errors so it reports too many 
errors, and the disk goes into a degraded state. I have had to remove the 
spares from the pool because otherwise the spares get pulled into the pool to 
replace the drives. There are no reported hardware problems with any of the 
drives. I have run scrub multiple times, and this also generates checksum 
errors. After the scrub completes, the checksum errors continue to occur during 
normal operation.

This problem also occurred with b103. Before that Solaris 10u4 was installed on 
the server, and it never had any checksum errors. With the OpenSolaris builds I 
am running CIFS Server, and that's the only difference in server function from 
when Solaris 10u4 was installed on it.

Is this a known issue? Any suggestions or workarounds?

Thank you.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] checksum errors on Sun Fire X4500

2009-01-22 Thread Carsten Aulbert
Hi Jay,

Jay Anderson schrieb:
 I have b105 running on a Sun Fire X4500, and I am constantly seeing checksum 
 errors reported by zpool status. The errors are showing up over time on every 
 disk in the pool. In normal operation there might be errors on two or three 
 disks each day, and sometimes there are enough errors so it reports too many 
 errors, and the disk goes into a degraded state. I have had to remove the 
 spares from the pool because otherwise the spares get pulled into the pool to 
 replace the drives. There are no reported hardware problems with any of the 
 drives. I have run scrub multiple times, and this also generates checksum 
 errors. After the scrub completes, the checksum errors continue to occur during 
 normal operation.
 
 This problem also occurred with b103. Before that Solaris 10u4 was installed 
 on the server, and it never had any checksum errors. With the OpenSolaris 
 builds I am running CIFS Server, and that's the only difference in server 
 function from when Solaris 10u4 was installed on it.
 
 Is this a known issue? Any suggestions or workarounds?

We had something similar: two or three disk slots started to act
weird and failed quite often - usually starting with a high error rate.
After we had exchanged two hard drives, the Sun hotline arranged to exchange
the backplane - essentially the chassis was replaced.

Since then, we have not encountered anything like this anymore.

So it *might* be the backplane or a broken Marvell controller, but it's
hard to judge.
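
One way to gather more evidence is to check whether the underlying error reports 
cluster on particular devices or one controller; a rough sketch (adjust to your 
pool and devices):

# fmdump -eV | grep vdev_path | sort | uniq -c   # which devices the ZFS ereports point at
# fmadm faulty                                   # any faults FMA has actually diagnosed
# iostat -En | grep "Errors:"                    # per-device soft/hard/transport error counters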

HTH

Carsten
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] iSCSI Network Hang - LUN becomes unavailable

2009-01-22 Thread M
Here's what fixed this:

Added

tx_hcksum_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
lso_enable=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;

to /kernel/drv/e1000g.conf
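
(For what it's worth, the driver only picks up driver.conf changes when it 
attaches, so after editing the file either reboot or force a re-read, e.g.:)

# update_drv -f e1000g     # forces the e1000g driver to re-read its .conf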
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-22 Thread Greg Mason
We're evaluating the possibility of speeding up NFS operations on our 
X4540s with dedicated log devices. What we are specifically evaluating 
is replacing one or two of our spare SATA disks with SATA SSDs.

Has anybody tried using SSD device(s) as dedicated ZIL devices in an 
X4540? Are there any known technical issues with using an SSD in an X4540?
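
(For reference, adding one would just be something like the following; device 
names are made up:)

# zpool add tank log c5t7d0                  # single dedicated ZIL device
# zpool add tank log mirror c5t6d0 c5t7d0    # or a mirrored pair, if you can spare two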

-Greg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cifs perfomance

2009-01-22 Thread Nathan Kroenert
Are you able to qualify that a little?

I'm using a realtek interface with OpenSolaris and am yet to experience 
any issues.

Nathan.

Brandon High wrote:
 On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
 bfrie...@simple.dallas.tx.us wrote:
 Several people reported this same problem.  They changed their
 ethernet adaptor to an Intel ethernet interface and the performance
 problem went away.  It was not ZFS's fault.
 
 It may not be a ZFS problem, but it is a OpenSolaris problem. The
 drivers for hardware Realtek and other NICs are ... not so great.
 
 -B
 

-- 
//
// Nathan Kroenert  nathan.kroen...@sun.com //
// Systems Engineer Phone:  +61 3 9869-6255 //
// Sun Microsystems Fax:+61 3 9869-6288 //
// Level 7, 476 St. Kilda Road  Mobile: 0419 305 456//
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cifs perfomance

2009-01-22 Thread Brandon High
On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
nathan.kroen...@sun.com wrote:
 Are you able to qualify that a little?

 I'm using a realtek interface with OpenSolaris and am yet to experience any
 issues.

There's a lot of anecdotal evidence that replacing the rge driver with
the gani driver can fix poor NFS and CIFS performance. Another option
is to use an Intel NIC in place of the Realtek.

Search the archives for gani or slow CIFS and you'll find several
people who resolved poor performance by getting rid of the rge driver.

While it's not hard evidence, it seems to indicate that there are
problems with the driver (and most likely the hardware).
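
If you want to check what a given box is actually using, something like this 
shows the link-to-driver binding (output and names will differ per machine):

# dladm show-link                      # lists links such as rge0, gani0 or e1000g0
# prtconf -D | egrep -i 'rge|gani'     # shows which driver instance is attached to the NIC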

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cifs perfomance

2009-01-22 Thread Nathan Kroenert
Interesting. I'll have a poke...

Thanks!

Nathan.

Brandon High wrote:
 On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
 nathan.kroen...@sun.com wrote:
 Are you able to qualify that a little?

 I'm using a realtek interface with OpenSolaris and am yet to experience any
 issues.
 
 There's a lot of anecdotal evidence that replacing the rge driver with
 the gani driver can fix poor NFS and CIFS performance. Another option
 is to use an Intel NIC in place of the Realtek.
 
 Search the archives for gani or slow CIFS and you'll find several
 people who resolved poor performance by getting rid of the rge driver.
 
 While it's not hard evidence, it seems to indicate that there are
 problems with the driver (and most likely the hardware).
 
 -B
 

-- 
//
// Nathan Kroenert  nathan.kroen...@sun.com //
// Systems Engineer Phone:  +61 3 9869-6255 //
// Sun Microsystems Fax:+61 3 9869-6288 //
// Level 7, 476 St. Kilda Road  Mobile: 0419 305 456//
// Melbourne 3004   VictoriaAustralia   //
//
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send -R slow

2009-01-22 Thread BJ Quinn
I'm using OpenSolaris with ZFS as a backup server.  I copy all my data from 
various sources onto the OpenSolaris server daily, and run a snapshot at the 
end of each backup.  Using gzip-1 compression, mount -F smbfs, and the 
--in-place and --no-whole-file switches for rsync, I get efficient space usage, 
only storing the blocks that changed each day.  This way, I have a backup 
server containing all backups for all days going back effectively indefinitely. 
 Works great.

Of course, I also want to have something that can be rotated and/or taken 
offsite.  What I've done is use an internal drive in the backup server to 
actually receive and store all the backups and snapshots themselves.  Then at 
the end of the actual backup I run a snapshot, and then do a zfs send -R of my 
backup pool and all its snapshots to an external drive.  Not being able to 
trust what's on the drive (its contents could possibly have changed since last 
time I used it, and I want every snapshot on every external drive), I wipe the 
external drive clean and then have it receive the full contents of the 
non-incremental zfs send -R backuppool I mentioned above.
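
(To make that concrete, the nightly rotation boils down to something like the 
following; pool and device names here are placeholders:)

# zpool destroy extbackup                             # wipe whatever is on the external drive
# zpool create extbackup c5t0d0
# zfs snapshot -r backuppool@`date +%Y%m%d`
# zfs send -R backuppool@`date +%Y%m%d` | zfs receive -F -d extbackup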

This works.  However, it's painfully slow.  I get the impression that zfs is 
de-compressing and then re-compressing the data instead of transferring it in 
its compressed state, and then when the incrementals start copying over (the 
snapshots themselves), it gets drastically slower.  The whole process works, 
but I'm thinking that when I start getting too many snapshots, it won't finish 
overnight and will run into the next day.

I don't want to just copy over the contents of my most recent snapshot on my 
backup server to the external drive then run a snapshot on the external drive, 
because I'd like each external drive to contain ALL the snapshots from the 
internal drive.

Is there any way to speed up a compressed zfs send -R?  Or is there some other 
way to approach this?  Maybe some way to do a bit-level clone of the internal 
drive to the external drive (the internal backup drive is not the same as the 
OS drive, so it could be unmounted), or SNDR replication or something?

Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-22 Thread Antonius
Yes, that's exactly what I did. The issue is that I can't get the corrected 
label to be written once I've zeroed the drive. I get an error from fdisk, which 
apparently still sees the backup label.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send -R slow

2009-01-22 Thread Bob Friesenhahn
On Thu, 22 Jan 2009, BJ Quinn wrote:

 Is there any way to speed up a compressed zfs send -R?  Or is there 
 some other way to approach this?  Maybe some way to do a bit-level 
 clone of the internal drive to the external drive (the internal 
 backup drive is not the same as the OS drive, so it could be 
 unmounted), or SNDR replication or something?

Maybe you can make your external drive a 'mirror' so that it gets 
resilvered and can be removed.  Of course you may want to have several 
of these drives.
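
Roughly, the attach/detach cycle would be something like this (device names 
invented; note that a disk removed with a plain 'zpool detach' can't later be 
imported as a pool on its own):

# zpool attach backuppool c1t0d0 c5t0d0   # external disk joins as a mirror and resilvers
# zpool status backuppool                 # wait for the resilver to finish
# zpool detach backuppool c5t0d0          # then pull the drive and take it offsite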

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Bug report: disk replacement confusion

2009-01-22 Thread Scott L. Burson
This is in snv_86.  I have a four-drive raidz pool.  One of the drives died.  I 
replaced it, but wasn't careful to put the new drive on the same controller 
port; one of the existing drives wound up on the port that had previously been 
used by the failed drive, and the new drive wound up on the port previously 
used by that drive.

I powered up and booted, and ZFS started a resilver automatically, but the pool 
status was confused.  It looked like this, even after the resilver completed 
(indentation is being discarded here):

NAME        STATE     READ WRITE CKSUM
pool0       DEGRADED     0     0     0
  raidz1    DEGRADED     0     0     0
    c5t1d0  FAULTED      0     0     0  too many errors
    c5t1d0  ONLINE       0     0     0
    c5t3d0  ONLINE       0     0     0
    c5t0d0  ONLINE       0     0     0

Doing 'zpool clear' just changed the too many errors to corrupted data.

I then tried 'zpool replace pool0 c5t1d0 c5t2d0' to see if that would 
straighten things out (hoping it wouldn't screw things up any further!).  It 
started another resilver, during which the status looked like this:

NAME           STATE     READ WRITE CKSUM
pool0          DEGRADED     0     0     0
  raidz1       DEGRADED     0     0     0
    replacing  DEGRADED     0     0     0
      c5t1d0   FAULTED      0     0     0  corrupted data
      c5t2d0   ONLINE       0     0     0
    c5t1d0     ONLINE       0     0     0
    c5t3d0     ONLINE       0     0     0
    c5t0d0     ONLINE       0     0     0

Maybe this will work, but -- doesn't ZFS put unique IDs on the drives so it can 
track them in case they wind up on different ports?  If so, seems like it needs 
to back-map that information to the device names when mounting.  Or something :)
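
(For reference, each vdev label does carry a GUID, and zdb can read it back 
directly; the device path below is just an example:)

# zdb -l /dev/dsk/c5t1d0s0     # dumps the vdev labels, including the vdev GUID and pool GUID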

-- Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-01-22 Thread Brad Hill
 I would get a new 1.5 TB and make sure it has the new firmware and replace 
 c6t3d0 right away - even if someone here comes up with a magic solution, you 
 don't want to wait for another drive to fail.

The replacement disk showed up today but I'm unable to replace the one marked 
UNAVAIL:

r...@blitz:~# zpool replace tank c6t3d0
cannot open 'tank': pool is unavailable

 I would in this case also immediately export the pool (to prevent any 
 write attempts) and see about a firmware update for the failed drive 
 (probably need windows for this).

While I didn't export first, I did boot with a livecd and tried to force the 
import with that:

r...@opensolaris:~# zpool import -f tank
internal error: Bad exchange descriptor
Abort (core dumped)

Hopefully someone on this list understands what situation I am in and how to 
resolve it. Again, many thanks in advance for any suggestions you all have to 
offer.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-22 Thread Jonathan Edwards
not quite .. it's 16KB at the front and 8MB at the back of the disk (16384  
sectors) for the Solaris EFI - so you need to zero out both of these

of course, since these drives are 1TB, I find it's easier to format  
to SMI (vtoc) .. with format -e (choose SMI, label, save, validate -  
then choose EFI)
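
something like this is the dd variant, if you'd rather not go through format -e  
(raw whole-disk device and size below are placeholders - triple-check the target  
before pointing dd at anything):

# dd if=/dev/zero of=/dev/rdsk/c1d1p0 bs=1024k count=1                           # primary label at the front
# dd if=/dev/zero of=/dev/rdsk/c1d1p0 bs=1024k oseek=`expr $DISKMB - 8` count=8  # backup label in the last 8MB; DISKMB = disk size in MB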

but to Casper's point - you might want to make sure that fdisk is  
using the whole disk .. you should probably reinitialize the fdisk  
sectors either with the fdisk command or run fdisk from format (delete  
the partition, create a new partition using 100% of the disk, blah,  
blah) ..

finally - glancing at the format output - there appears to be a mix of  
labels on these disks, as you've got a mix of c#d# entries and c#t#d#  
entries, so I might suspect fdisk might not be consistent across the  
various disks here .. also noticed that you dumped the vtoc for c3d0  
and c4d0, but you're replacing c2d1 (of unknown size/layout) with c1d1  
(never dumped in your emails) .. so while this has been an animated  
(slightly trollish) discussion on right-sizing (odd - I've typically  
only seen that term as an ONTAPism) with some short-stroking digs ..  
it's a little unclear what the c1d1s0 slice looks like here or what  
the cylinder count is - I agree it should be the same - but it would  
be nice to see from my armchair here

On Jan 22, 2009, at 3:32 AM, Dale Sears wrote:

 Would this work?  (to get rid of an EFI label).

   dd if=/dev/zero of=/dev/dsk/thedisk bs=1024k count=1

 Then use

   format

 format might complain that the disk is not labeled.  You
 can then label the disk.

 Dale



 Antonius wrote:
 can you recommend a walk-through for this process, or a bit more of  
 a description? I'm not quite sure how I'd use that utility to  
 repair the EFI label
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-22 Thread Scott L. Burson
Well, the second resilver finished, and everything looks okay now.  Doing one 
more scrub to be sure...

-- Scott
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failure to boot from zfs on Sun v880

2009-01-22 Thread Al Slater
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Mark J Musante wrote:
 On Thu, 22 Jan 2009, Al Slater wrote:
 Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
 supported
 
 This line is coming from svm, which leads me to believe that the zfs 
 boot blocks were not properly installed by live upgrade.
 
 You can try doing this by hand, with the command:
 
 installboot -F zfs /usr/platform/`uname -m`/lib/fs/zfs/bootblk 
 /dev/rdsk/c1t1d0s0
 
 But if live upgrade was unable to do that already, there may be other 
 issues to uncover once you actually do boot.

Unfortunately running the installboot command did not change anything.
Same message and same panic.

- --
Al Slater

Technical Director
SCL

Phone : +44 (0)1273 07
Fax   : +44 (0)1273 01
email : al.sla...@scluk.com

Stanton Consultancy Ltd
Pavilion House, 6-7 Old Steine, Brighton, East Sussex, BN1 1EJ
Registered in England Company number: 1957652 VAT number: GB 760 2433 55
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (SunOS)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkl5YOQACgkQz4fTOFL/EDarSgCeJ9r9Pa6F6gv6eqY1b9GsJQXW
PvcAn1JsMoIFEGb9R6x8aiJTOaUXQlXs
=9ha+
-END PGP SIGNATURE-

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Failure to boot from zfs on Sun v880

2009-01-22 Thread Al Slater
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Al Slater wrote:
 Mark J Musante wrote:
 On Thu, 22 Jan 2009, Al Slater wrote:
 Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
 supported
 This line is coming from svm, which leads me to believe that the zfs 
 boot blocks were not properly installed by live upgrade.
 
 You can try doing this by hand, with the command:
 
 installboot -F zfs /usr/platform/`uname -m`/lib/fs/zfs/bootblk 
 /dev/rdsk/c1t1d0s0
 
 But if live upgrade was unable to do that already, there may be other 
 issues to uncover once you actually do boot.
 
 Unfortunately running the installboot command did not change anything.
 Same message and same panic.
 

I have found the problem.  I had to remove the following line from
/etc/system in the new BE

rootdev:/pseudo/m...@0:0,13,blk

thanks

- --
Al Slater

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (SunOS)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkl5aFUACgkQz4fTOFL/EDbiswCdGiOUDze14knrUcBDdnsfp3Xb
spAAmQEDGg8vwSB77Psgxk37v5wYZTax
=8sOx
-END PGP SIGNATURE-

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-22 Thread Ross
I don't have an x4540, and this may not be relevant to your usage, but the 
concern I would have would be how this is going to affect throughput.  An x4540 
can stream data to and from its disks far faster than any SATA SSD, or even a 
pair of SATA SSDs, can.  I'd be nervous about improving my latency at the cost 
of throughput.

I've read that ZFS is supposed to stream large writes directly to disk to avoid 
this, but I've also read about this not always working (no links I'm afraid, 
this was a while ago).

For an x4540, what I'd be watching are the PCIe SSD devices, and hoping that 
either Fusion-io or Micron release Solaris drivers for them.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-22 Thread Ross
However, now that I've written that, Sun uses SATA (SAS?) SSDs in their high-end 
Fishworks storage, so I guess it definitely works for some use cases.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-22 Thread Brent Jones
On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones br...@servuhome.net wrote:
 On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
 Ian Collins wrote:
 Send/receive speeds appear to be very data dependent.  I have several 
 different filesystems containing differing data types.  The slowest to 
 replicate is mail and my guess it's the changes to the index files that 
 takes the time.  Similar sized filesystems with similar deltas where files 
 are mainly added or deleted appear to replicate faster.


 Has anyone investigated this?  I have been replicating a server today
 and the differences between incremental processing is huge, for example:

 filesystem A:

 received 1.19Gb stream in 52 seconds (23.4Mb/sec)

 filesystem B:

 received 729Mb stream in 4564 seconds (164Kb/sec)

 I can delve further into the content if anyone is interested.

 --
 Ian.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


 What hardware, to/from is this?

 How are those filesystems laid out, what is their total size, used
 space, and guessable file count / file size distribution?

 I'm also trying to put together the puzzle to provide more detail to a
 case I opened with Sun regarding this.

 --
 Brent Jones
 br...@servuhome.net


Just to update this (hope no one is tired of hearing about it): I just
image-updated to snv_105 to obtain the patch for CR 6418042 at the
recommendation of a Sun support technician.

My results are much improved, on the order of 5-100 times faster
(either over Mbuffer or SSH). Not only do snapshots begin sending
right away (no longer requiring several minutes of reads before
sending any data), the actual send will sustain about 35-50MB/sec over
SSH, and up to 100MB/s via Mbuffer (on a single Gbit link, I am
network limited now, something I never thought I would say I love to
see!).

Previously, I was lucky if the snapshot would begin sending any data
after about 10 minutes, and once it did begin sending, it would
usually peak at about 1MB/sec via SSH, and up to 20MB/sec over
Mbuffer.
Mbuffer seems to play a much larger role now, as SSH appears to be
single-threaded for compression/encryption, pegging a single CPU's
worth of power.
Mbuffer's raw network performance saturates my Gigabit link, making
me consider link bonding or something to see how fast it -really- can
go, now that the taps are open!
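
For anyone wanting to reproduce the mbuffer transport, the pipeline is roughly 
as follows (host name, port and buffer sizes below are arbitrary examples):

receiver# mbuffer -I 9090 -s 128k -m 1G | zfs receive -F -d backuppool
sender#   zfs send -R pool@snap | mbuffer -O recvhost:9090 -s 128k -m 1G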

So, my issues appear pretty much resolved; although snv_105 is in the
/dev branch, things appear stable for the most part.

Please let me know if you have any questions, or want additional info
on my setup and testing.

Regards,

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-22 Thread Ian Collins
Brent Jones wrote:
 On Fri, Jan 9, 2009 at 11:41 PM, Brent Jones br...@servuhome.net wrote:
   
 On Fri, Jan 9, 2009 at 7:53 PM, Ian Collins i...@ianshome.com wrote:
 
 Ian Collins wrote:
   
 Send/receive speeds appear to be very data dependent.  I have several 
 different filesystems containing differing data types.  The slowest to 
 replicate is mail and my guess it's the changes to the index files that 
 takes the time.  Similar sized filesystems with similar deltas where files 
 are mainly added or deleted appear to replicate faster.


 
 Has anyone investigated this?  I have been replicating a server today
 and the differences between incremental processing is huge, for example:

 filesystem A:

 received 1.19Gb stream in 52 seconds (23.4Mb/sec)

 filesystem B:

 received 729Mb stream in 4564 seconds (164Kb/sec)

 I can delve further into the content if anyone is interested.

 --
 Ian.

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

   
 What hardware, to/from is this?

 How are those filesystems laid out, what is their total size, used
 space, and guessable file count / file size distribution?

 I'm also trying to put together the puzzle to provide more detail to a
 case I opened with Sun regarding this.

 --
 Brent Jones
 br...@servuhome.net

 

 Just to update this, hope no one is tired of hearing about it. I just
 image-updated to snv_105 to obtain patch for CR 6418042 at the
 recommendation from a Sun support technician.

 My results are much improved, on the order of 5-100 times faster
 (either over Mbuffer or SSH). Not only do snapshots begin sending
 right away (no longer requiring several minutes of reads before
 sending any data), the actual send will sustain about 35-50MB/sec over
 SSH, and up to 100MB/s via Mbuffer (on a single Gbit link, I am
 network limited now, something I never thought I would say I love to
 see!).
   
Thanks for the heads-up, Brent. I'll have to sweet-talk one of my former
clients into running OpenSolaris on their x4540s.  Does anyone know if
NetVault is supported on OpenSolaris?

Do any of the Sun folks know if these updates will be back-ported to
Solaris 10 in a patch or update release?

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss