[zfs-discuss] zfs write cache enable on boot disks ?

2008-04-23 Thread Par Kansala
Hi,

Will the upcoming ZFS boot capabilities also enable the write cache on a
boot disk, like ZFS does on regular data disks when whole disks are used?
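For context, this is the behavior on data pools today: give ZFS the whole disk and it owns the label, so it can safely turn the write cache on; give it a slice and it leaves the cache alone. A rough sketch (device names here are placeholders):

```shell
# Whole disk (no slice suffix): ZFS writes an EFI label and, since it
# owns every block, may enable the disk's write cache.
zpool create tank c1t0d0

# A slice: other consumers may share the disk, so ZFS leaves the
# write cache setting untouched.
zpool create tank c1t0d0s0

# On many drivers you can inspect the cache state by hand in format's
# expert mode: select the disk, then: cache -> write_cache -> display
format -e
```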

//Par
-- 

Pär Känsälä
OEM Engagement Architect
Sun Microsystems
Phone +46 8 631 1782 (x45782)
Mobile +46 70 261 1782
Fax +46 455 37 92 05
Email [EMAIL PROTECTED]

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for write-only media?

2008-04-23 Thread Joerg Schilling
Dana H. Myers [EMAIL PROTECTED] wrote:

 Bob Friesenhahn wrote:
  Are there any plans to support ZFS for write-only media such as 
  optical storage?  It seems that if mirroring or even zraid is used 
  that ZFS would be a good basis for long term archival storage.
 I'm just going to assume that "write-only" here means write-once,
 read-many, since it's far too late for an April Fool's joke.

I know of two write-only device types:

WOM   write-only media
WORN  write-once, read-never (this one is often used for backups ;-)

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS for write-only media?

2008-04-23 Thread Joerg Schilling
Bob Friesenhahn [EMAIL PROTECTED] wrote:

 On Mon, 21 Apr 2008, Dana H. Myers wrote:

  Bob Friesenhahn wrote:
  Are there any plans to support ZFS for write-only media such as optical 
  storage?  It seems that if mirroring or even zraid is used that ZFS would 
  be a good basis for long term archival storage.
  I'm just going to assume that write-only here means write-once,
  read-many, since it's far too late for an April Fool's joke.

 Yes, of course.  Such as to CD-R, DVD-RW, or more exotic technologies 
 such as holographic drives (300GB drives are on the market). For 
 example, with two CD-R drives it should be possible to build a ZFS 
 mirror on two CDs, but the I/O to these devices may need to be done in 
 a linear sequential fashion at a rate sufficient to keep the writer 
 happy, so temporary files (or memory-based buffering) likely need to 
 be used.

CD-R media is not really WORM (write once, read many), as CD-R does not allow
_every_ sector to be written exactly once. Due to the way the scrambled error
correction is implemented, there are always seven unusable sectors between two
areas on the medium that have been written independently.
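Back of the envelope, taking that figure of seven lost sectors per independently written area and the usual 2048-byte data sector, fixed-size packet writing gets expensive fast (a sketch, not measured numbers):

```python
SECTOR_BYTES = 2048
GAP_SECTORS = 7  # unusable sectors between independently written areas

def packet_overhead(total_sectors: int, packet_sectors: int) -> float:
    """Fraction of the disc lost to inter-packet gaps when user data
    is written in fixed packets of `packet_sectors` sectors each."""
    packets = total_sectors // (packet_sectors + GAP_SECTORS)
    usable = packets * packet_sectors
    return 1.0 - usable / total_sectors

# A 650 MB CD-R is roughly 333,000 data sectors.  Writing it in
# 32-sector packets wastes roughly 18% of the disc on gaps alone.
print(round(packet_overhead(333_000, 32), 3))
```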

Jörg



[zfs-discuss] zfs data corruption

2008-04-23 Thread Vic Engle
I'm hoping someone can help me understand a ZFS data corruption symptom. We 
have a zpool with checksums turned off. zpool status shows that data corruption 
occurred. The application using the pool at the time reported a read error, and 
zpool status (see below) shows 2 read errors on a device. The thing that is 
confusing to me is how ZFS determines that data corruption exists when reading 
data from a pool with checksums turned off.

Also, I'm wondering about the persistent errors in the output below. Since no 
specific file or directory is mentioned does this indicate pool metadata is 
corrupt?

Thanks for any help interpreting the output...


# zpool status -xv
  pool: zpool1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
zpool1   ONLINE   2 0 0
  c4t60A9800043346859444A476B2D48446Fd0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D484352d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D484236d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D482D6Cd0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D483951d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D483836d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D48366Bd0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D483551d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D483435d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D48326Bd0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D483150d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D483035d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D47796Ad0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D477850d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D477734d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D47756Ad0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D47744Fd0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D477333d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D477169d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D47704Ed0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D476F33d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D476D68d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D476C4Ed0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D476B32d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D476968d0  ONLINE   0 0 0
  c4t60A98000433468656834476B2D453974d0  ONLINE   0 0 0
  c4t60A98000433468656834476B2D454142d0  ONLINE   0 0 0
  c4t60A98000433468656834476B2D454255d0  ONLINE   0 0 0
  c4t60A98000433468656834476B2D45436Dd0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D487346d0  ONLINE   2 0 0
  c4t60A9800043346859444A476B2D487175d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D48705Ad0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486F45d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486D74d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486C5Ad0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486B44d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486974d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486859d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486744d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486573d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486459d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486343d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D486173d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D482F58d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D485A43d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D485872d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D485758d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D485642d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D485471d0  ONLINE   0 0 0
  c4t60A9800043346859444A476B2D485357d0  ONLINE   0 0 0
  

Re: [zfs-discuss] zfs data corruption

2008-04-23 Thread Nathan Kroenert
I'm just taking a stab here, so I could be completely wrong, but IIRC, 
even if you disable checksums, ZFS still checksums the metadata...

So, it could be metadata checksum errors.
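(For the curious: metadata blocks get a fletcher-family checksum whatever the dataset's checksum property says. A toy Python sketch of a fletcher4-style check, just to illustrate how a single flipped bit is caught; the real code lives in the kernel and differs in detail:)

```python
import struct

def fletcher4(data: bytes) -> tuple:
    """Toy fletcher4-style checksum over little-endian 32-bit words,
    with four running sums kept modulo 2**64."""
    if len(data) % 4:                        # pad to a whole word
        data += b"\0" * (4 - len(data) % 4)
    a = b = c = d = 0
    mask = (1 << 64) - 1
    for (word,) in struct.iter_unpack("<I", data):
        a = (a + word) & mask
        b = (b + a) & mask
        c = (c + b) & mask
        d = (d + c) & mask
    return (a, b, c, d)

block = b"metadata block contents" * 40
corrupt = bytearray(block)
corrupt[5] ^= 0x01                           # flip a single bit
print(fletcher4(block) != fletcher4(bytes(corrupt)))
```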

Others on the list might have some funky zdb thingies you could use to see 
what it actually is...

Note: typed pre caffeine... :)

Nathan

Vic Engle wrote:
 I'm hoping someone can help me understand a ZFS data corruption symptom. We 
 have a zpool with checksums turned off. zpool status shows that data 
 corruption occurred. The application using the pool at the time reported a 
 read error, and zpool status (see below) shows 2 read errors on a device. 
 The thing that is confusing to me is how ZFS determines that data corruption 
 exists when reading data from a pool with checksums turned off.
 
 Also, I'm wondering about the persistent errors in the output below. Since no 
 specific file or directory is mentioned does this indicate pool metadata is 
 corrupt?
 
 Thanks for any help interpreting the output...
 
 
  [zpool status output snipped; see the original message above]

Re: [zfs-discuss] zfs data corruption

2008-04-23 Thread Rob
  Since no specific file or directory is mentioned
Install newer bits and you'll get better info automatically,
but for now try:

zdb -vvv zpool1 17
zdb -vvv zpool1 18
zdb -vvv zpool1 19
echo remove those objects
zpool clear zpool1
zpool scrub zpool1



Re: [zfs-discuss] zfs data corruption

2008-04-23 Thread Victor Engle
Thanks! That would explain things. I don't believe it was a real disk
read error because of the absence of evidence in /var/adm/messages.

I'll review the man page and documentation to confirm that metadata is
checksummed.

Regards,
Vic


On Wed, Apr 23, 2008 at 6:30 PM, Nathan Kroenert
[EMAIL PROTECTED] wrote:
 I'm just taking a stab here, so could be completely wrong, but IIRC, even if
 you disable checksum, it still checksums the metadata...

  So, it could be metadata checksum errors.

  Others on the list might have some funky zdb thingies you could use to see what
 it actually is...

  Note: typed pre caffeine... :)

  Nathan



  Vic Engle wrote:

  I'm hoping someone can help me understand a ZFS data corruption symptom.
 We have a zpool with checksums turned off. zpool status shows that data
 corruption occurred. The application using the pool at the time reported a
 read error, and zpool status (see below) shows 2 read errors on a device.
 The thing that is confusing to me is how ZFS determines that data corruption
 exists when reading data from a pool with checksums turned off.
 
  Also, I'm wondering about the persistent errors in the output below. Since
 no specific file or directory is mentioned does this indicate pool metadata
 is corrupt?
 
  Thanks for any help interpreting the output...
 
 
  [zpool status output snipped; see the original message above]

Re: [zfs-discuss] Thumper / X4500 marvell driver issues

2008-04-23 Thread Lida Horn
Carson Gaspar wrote:
 [ Sending this here, as I've publicly complained about this bug on the 
 ZFS list previously, and there have been prior threads related to the 
 fix hitting OpenSolaris ]

 For those of you who have been suffering marvell device resets and hung 
 I/Os on Sol 10 U4 with NCQ enabled, you should talk to your Sun support 
 folks about IDR137601-02. We have one server that could consistently 
 reproduce the problem, and it's been running since Friday with NCQ 
 enabled and no hangs after applying the IDR. We had tested previous IDRs 
 that did _not_ fix the problem, but so far, so good...
   
That is good to hear.
 While we had a rough start with support on this issue, kudos to the 
 engineers for finally tracking down this heisenbug (fingers crossed...), 
 and here's hoping it makes it into a public patch soon.
   
I (we) appreciate your kind words and hope that you have no further issues. 
By the way, all the additional issues were related to hardware error cases, 
such as sector I/O errors, which is why they were so difficult to track down 
and reproduce.

Regards,
Lida
