Re: [zfs-discuss] ZFS on Hitachi SAN, pool recovery

2008-09-24 Thread alan baldwin
Thanks to all for your comments and sharing your experiences.

In my setup the pools are split and then NFS mounted to other nodes, mostly 
Oracle DB boxes. These mounts provide areas for RMAN Flash backups to be 
written.
If I lose connectivity to any host I will swing the LUNs over to the alternate 
host and repoint the NFS mount on the Oracle node, so _hopefully_ we should 
be safe with regard to pool corruption.
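
For what it's worth, the swing would look something like the following sketch 
(pool name, hostnames, and paths are made up for illustration; -f forces the 
import because the failed host won't have exported the pool cleanly):

# On the surviving SAN-attached host, take over the pool:
$ zpool import -f rmanpool
# (if sharenfs is set on the dataset, the NFS share comes back with the import)

# On the Oracle node, repoint the mount:
$ umount /rman_flash
$ mount -F nfs althost:/rmanpool/backups /rman_flash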

Thanks again.
Max


Re: [zfs-discuss] ZFS with Fusion-IO?

2008-09-24 Thread Ross
I agree, it looks like it would be perfect, but unfortunately without Solaris 
drivers it's pretty much a non-starter.

That hasn't stopped me pestering Fusion-IO wherever I can, though, to see if 
they are willing to develop Solaris drivers. Almost everywhere I've seen these 
cards reviewed, there have been comments about how good they would be for ZFS.


[zfs-discuss] Can't remove zpool spare, status says faulted

2008-09-24 Thread Joe Crain
When I issue the zpool remove command on the spare I receive no response, good 
or bad. Afterwards the drive is still listed as a spare in the zpool, and 
zpool status shows the spare as FAULTED. Any ideas?

$ zpool status
  pool: datapool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool1   ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c8t1d2  ONLINE       0     0     0
            c8t1d3  ONLINE       0     0     0
            c8t1d4  ONLINE       0     0     0
        spares
          c8t1d5    FAULTED  corrupted data

$ zpool remove datapool1 c8t1d5
$
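
In case it helps, one sequence worth trying (just a sketch; zpool remove is 
normally silent on success, and zpool clear resets error states on a device, 
which may or may not unstick a FAULTED spare):

$ zpool status -v datapool1
$ zpool clear datapool1 c8t1d5     # clear the corrupted-data/FAULTED state
$ zpool remove datapool1 c8t1d5    # then retry the removal
$ zpool status datapool1           # verify the spare is gone

If that doesn't work, exporting and re-importing the pool before retrying the 
removal may also be worth a try.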


Re: [zfs-discuss] resilver being killed by 'zpool status' when root

2008-09-24 Thread Blake Irvin
I was doing a manual resilver, not one involving spares.  I still suspect the 
issue comes from your script running as root, which is common for reporting scripts.


Re: [zfs-discuss] ZFS on Hitachi SAN, pool recovery

2008-09-24 Thread Marcelo Leal
Just out of curiosity, why don't you use SC?

 Leal.


[zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Erik Trimble
I was under the impression that MLC is the preferred type of SSD, but I
want to prevent myself from having a think-o.

I'm looking to get (2) SSDs to use as my boot drive. It looks like I can
get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
Which would be the better technology?  (I'll worry about rated access
times/etc. of the drives; I'm just wondering about the general tech for OS
boot drive usage...)



-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)



Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Neal Pollack
Erik Trimble wrote:
> I was under the impression that MLC is the preferred type of SSD, but I
> want to prevent myself from having a think-o.
>
> I'm looking to get (2) SSD to use as my boot drive. It looks like I can
> get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
> Which would be the better technology?  (I'll worry about rated access
> times/etc of the drives, I'm just wondering about general tech for an OS
> boot drive usage...)

SLC is faster and typically more expensive.




Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Bob Friesenhahn
On Wed, 24 Sep 2008, Erik Trimble wrote:

> I was under the impression that MLC is the preferred type of SSD, but I
> want to prevent myself from having a think-o.

SLC = Single-level cell
MLC = Multi-level cell

Since SLC stores only a binary value rather than several possible 
encoded values, it is more reliable but stores less data per cell.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/



Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Rich Teer
On Wed, 24 Sep 2008, Erik Trimble wrote:

> I was under the impression that MLC is the preferred type of SSD, but I
> want to prevent myself from having a think-o.

Depends on what one prefers, I guess.  :-)

SLC is preferred for performance reasons; MLC tends to be cheaper.
I installed an SLC SSD in my Ferrari 3400.  It was SIGNIFICANTLY
faster than the 7200 RPM spinning rust it replaced (which was no
slouch itself).

> times/etc of the drives, I'm just wondering about general tech for an OS
> boot drive usage...)

FWIW, I'd say go with SLC.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Tim
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:

> I was under the impression that MLC is the preferred type of SSD, but I
> want to prevent myself from having a think-o.
>
> I'm looking to get (2) SSD to use as my boot drive. It looks like I can
> get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
> Which would be the better technology?  (I'll worry about rated access
> times/etc of the drives, I'm just wondering about general tech for an OS
> boot drive usage...)

Depends on the manufacturer.  The new Intel MLCs have proven to be as fast 
as, if not faster than, the SLCs, but they also cost just as much.  If they 
brought the price down, I'd say MLC all the way.  All other things being 
equal, though: SLC.


--Tim


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Mike Gerdts
On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
> I was under the impression that MLC is the preferred type of SSD, but I
> want to prevent myself from having a think-o.

MLC. A description of why can be found in

http://mags.acm.org/communications/200807/

See "Flash Storage Memory" by Adam Leventhal, page 47.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Neal Pollack

Tim wrote:
> On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
>
>> I was under the impression that MLC is the preferred type of SSD, but I
>> want to prevent myself from having a think-o.
>>
>> I'm looking to get (2) SSD to use as my boot drive. It looks like I can
>> get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
>> Which would be the better technology?  (I'll worry about rated access
>> times/etc of the drives, I'm just wondering about general tech for an OS
>> boot drive usage...)
>
> Depends on the MFG.  The new Intel MLC's have proven to be as fast if
> not faster than the SLC's,

That is not comparing apples to apples.  The new Intel MLCs take the 
slower, lower-cost MLC chips and put them in parallel channels connected 
to an internal controller chip (think of RAID striping).

That way, they get large aggregate speeds for less total cost.
Other vendors will start to follow this idea.

But if you just take a raw chip in one channel, SLC is faster.

And, in the end, yes, the new Intel SSDs are very nice.

> but they also cost just as much.  If they brought the price down, I'd
> say MLC all the way.  All other things being equal though, SLC.
>
> --Tim




Re: [zfs-discuss] ZFS with Fusion-IO?

2008-09-24 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 09/24/2008 05:54:45 AM:

> I agree, it looks like it would be perfect, but unfortunately
> without Solaris drivers it's pretty much a non-starter.
>
> That hasn't stopped me pestering Fusion-IO wherever I can though to
> see if they are willing to develop Solaris drivers, almost
> everywhere I've seen these reviewed there have been comments about
> how good they would be for ZFS.

Maybe you should stop pestering, as I just received a cold call from them, 
made possible by them harvesting info on this list.

-Wade



[zfs-discuss] ZFS dump and swap

2008-09-24 Thread John Cecere
The man page for dumpadm says this:

    A given ZFS volume cannot be configured for both the swap area and
    the dump device.

And indeed when I try to use a zvol as both, I get:

    zvol cannot be used as a swap device and a dump device

My question is, why not?

Thanks,
John


Re: [zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Scott Laird
In general, I think SLC is better, but there are a number of brand-new
MLC devices on the market that are really fast; until a new generation
of SLC devices shows up, the MLC drives kind of win by default.

Intel is supposed to have an SLC drive showing up early next year with 
read performance similar to their new MLC device but 2x the write speed; 
that's at least 3 months out, though.


Scott

On Wed, Sep 24, 2008 at 12:16 PM, Neal Pollack [EMAIL PROTECTED] wrote:
> Tim wrote:
>
>> On Wed, Sep 24, 2008 at 1:41 PM, Erik Trimble [EMAIL PROTECTED] wrote:
>>
>>> I was under the impression that MLC is the preferred type of SSD, but I
>>> want to prevent myself from having a think-o.
>>>
>>> I'm looking to get (2) SSD to use as my boot drive. It looks like I can
>>> get 32GB SSDs composed of either SLC or MLC for roughly equal pricing.
>>> Which would be the better technology?  (I'll worry about rated access
>>> times/etc of the drives, I'm just wondering about general tech for an OS
>>> boot drive usage...)
>>
>> Depends on the MFG.  The new Intel MLC's have proven to be as fast if not
>> faster than the SLC's,
>
> That is not comparing apples to apples.  The new Intel MLCs take the
> slower, lower-cost MLC chips and put them in parallel channels connected
> to an internal controller chip (think of RAID striping).
> That way, they get large aggregate speeds for less total cost.
> Other vendors will start to follow this idea.
>
> But if you just take a raw chip in one channel, SLC is faster.
>
> And, in the end, yes, the new Intel SSDs are very nice.
>
>> but they also cost just as much.  If they brought the price down, I'd say
>> MLC all the way.  All other things being equal though, SLC.
>>
>> --Tim




Re: [zfs-discuss] ZFS dump and swap

2008-09-24 Thread Darren J Moffat
John Cecere wrote:
> The man page for dumpadm says this:
>
>     A given ZFS volume cannot be configured for both the swap area and
>     the dump device.
>
> And indeed when I try to use a zvol as both, I get:
>
>     zvol cannot be used as a swap device and a dump device
>
> My question is, why not?

Swap is a normal ZVOL and subject to COW, checksums, compression (and, 
coming soon, encryption).

Dump ZVOLs are preallocated contiguous space that are written to 
directly by the ldi_dump routines; they aren't written to by normal ZIO 
transactions, and they aren't checksummed - the compression is done by 
the dump layer, not by ZFS.  This is needed because when we are writing 
a crash dump we want as little as possible in the IO stack.
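
As a concrete illustration, the upshot is that swap and dump each need their 
own zvol (pool name and sizes below are hypothetical, and this assumes a 
release where zvols are supported as dump devices):

$ zfs create -V 2G rpool/swap           # ordinary zvol: COW, checksums apply
$ zfs create -V 2G rpool/dump           # preallocated when claimed for dump
$ swap -a /dev/zvol/dsk/rpool/swap      # add the swap device
$ dumpadm -d /dev/zvol/dsk/rpool/dump   # dedicate the second zvol to dumps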

--
Darren J Moffat


Re: [zfs-discuss] ZFS dump and swap

2008-09-24 Thread John Cecere
Darren,

Thanks for the explanation. Would you object if I opened a bug on the zfs man 
page to include what you've written here?

Thanks again,
John


Darren J Moffat wrote:
> John Cecere wrote:
>> The man page for dumpadm says this:
>>
>>     A given ZFS volume cannot be configured for both the swap area and
>>     the dump device.
>>
>> And indeed when I try to use a zvol as both, I get:
>>
>>     zvol cannot be used as a swap device and a dump device
>>
>> My question is, why not?
>
> Swap is a normal ZVOL and subject to COW, checksums, compression (and,
> coming soon, encryption).
>
> Dump ZVOLs are preallocated contiguous space that are written to
> directly by the ldi_dump routines; they aren't written to by normal ZIO
> transactions, and they aren't checksummed - the compression is done by
> the dump layer, not by ZFS.  This is needed because when we are writing
> a crash dump we want as little as possible in the IO stack.
>
> --
> Darren J Moffat

-- 
John Cecere
Americas Technology Office / Sun Microsystems
732-302-3922 / [EMAIL PROTECTED]


Re: [zfs-discuss] ZFS dump and swap

2008-09-24 Thread Darren J Moffat
John Cecere wrote:
> Thanks for the explanation. Would you object if I opened a bug on the
> zfs man page to include what you've written here?

I don't know if what I said is considered implementation detail or not.
Feel free to log the bug, but I can't say either way if it would be 
considered appropriate for the man page.

--
Darren J Moffat


Re: [zfs-discuss] web interface not showing up

2008-09-24 Thread James Andrewartha
mike wrote:
> On Sun, Sep 21, 2008 at 11:49 PM, Volker A. Brandt [EMAIL PROTECTED] wrote:
>
>> Hmmm... I run Solaris 10/sparc U4.  My /usr/java points to
>> jdk/jdk1.5.0_16.  I am using Firefox 2.0.0.16.  Works For Me(TM)  ;-)
>> Sorry, can't help you any further.  Maybe a question for desktop-discuss?
>
> it's a java error on the server side, not client side (although there
> is a javascript error in every browser i tried it in, but probably
> unrelated or an error due to the java not executing properly)
>
> anyway - you did help me at least get the webconsole running. the zfs
> admin piece of it though is throwing the java error...

Can you post the java error to the list? Do you have gzip compression or
the aclinherit property set on your filesystems, hitting bug 6715550?
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048457.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048550.html

-- 
James Andrewartha


Re: [zfs-discuss] web interface not showing up

2008-09-24 Thread mike
On Wed, Sep 24, 2008 at 9:37 PM, James Andrewartha [EMAIL PROTECTED] wrote:

> Can you post the java error to the list? Do you have gzip compression or
> the aclinherit property set on your filesystems, hitting bug 6715550?
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048457.html
> http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048550.html

Loading the page in Firefox 2.x on Windows XP Pro, using URL
https://192.168.1.202:6789/zfs/zfsmodule/Index

I can login to the web console, my only option is zfs administration.
I click on it, and the left frame displays a java error. The right
frame is empty.


Application Error
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class
com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]
Notes for application developers:

* To prevent users from seeing this error message, override the
onUncaughtException() method in the module servlet and take action
specific to the application
* To see a stack trace from this error, see the source for this page

Generated Wed Sep 24 21:38:15 PDT 2008



[EMAIL PROTECTED] ~]# zfs get aclinherit tank
NAME  PROPERTYVALUESOURCE
tank  aclinherit  restricted   default

Looks like changing that to passthrough worked. Thanks. I didn't
really research this much. Not enough time, didn't -really- need it.
But this will be fun to explore. Thanks for following up :)
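
For anyone hitting the same error, the workaround amounts to the following 
(dataset name taken from the zfs get output above; note this changes ACL 
inheritance behavior, so it sidesteps the console bug rather than fixing it):

$ zfs get aclinherit tank                # 'restricted' trips the console bug
$ zfs set aclinherit=passthrough tank    # switch to a value the console accepts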


Re: [zfs-discuss] ZFS dump and swap

2008-09-24 Thread Kyle McDonald
Darren J Moffat wrote:
> John Cecere wrote:
>> The man page for dumpadm says this:
>>
>>     A given ZFS volume cannot be configured for both the swap area and
>>     the dump device.
>>
>> And indeed when I try to use a zvol as both, I get:
>>
>>     zvol cannot be used as a swap device and a dump device
>>
>> My question is, why not?
>
> Swap is a normal ZVOL and subject to COW, checksums, compression (and,
> coming soon, encryption).

Would there be no performance benefit from having swap read/write from 
contiguous preallocated space also?

I do realize that nifty features like encryption might be lost in that 
case, but I'm wondering if there's any performance to be gained.

Then again, if you're concerned about performance you need to just buy 
RAM till you stop swapping altogether, huh?

   -Kyle

> Dump ZVOLs are preallocated contiguous space that are written to
> directly by the ldi_dump routines; they aren't written to by normal ZIO
> transactions, and they aren't checksummed - the compression is done by
> the dump layer, not by ZFS.  This is needed because when we are writing
> a crash dump we want as little as possible in the IO stack.
>
> --
> Darren J Moffat



[zfs-discuss] zpool file corruption

2008-09-24 Thread Mikael Karlsson
I have a strange problem involving changes to large files on a mirrored 
zpool in OpenSolaris snv_96. We use it as storage in a VMware ESXi lab 
environment. All virtual disk files get corrupted when changes are made 
within the files (when running the virtual machine, that is).

The sad thing is that I've created about ~200 GB of random data in large 
files and even modified those files without any problem (using dd with the 
skip and conv=notrunc options). I've copied the files within the pool and 
over the network on all network interfaces on the machine - without problems.
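
For reference, the kind of in-place modification test described above looks 
roughly like this (file path and sizes are made up; seek positions the write 
within the output file, and conv=notrunc keeps dd from truncating it):

$ dd if=/dev/urandom of=/testing/bigfile bs=1M count=1024
$ dd if=/dev/urandom of=/testing/bigfile bs=1M seek=100 count=10 conv=notrunc
$ zpool scrub testing      # then scrub and look for new checksum errors
$ zpool status -v testing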

It's just those .vmdk files that get corrupted.

The hardware is an Opteron desktop machine with a SIL3114 SATA interface. 
Personally I have exactly the same interface at home with the same setup 
and no problems; only the other hardware differs (disks and so on).

The disks are WD7500AACS, the ones with variable rotation speed 
(5400-7200 RPM). Could it be the disks? Could it be the disk controller or 
the rest of the hardware? I should mention that the controller has been 
flashed with a non-RAID BIOS.

I can provide more information if needed! Does anyone have any ideas or 
suggestions?


Some output:

bash-3.00# zpool status -vx
  pool: testing
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed with 1 errors on Wed Sep 24 16:59:13 2008
config:

        NAME        STATE     READ WRITE CKSUM
        testing     ONLINE       0     0    16
          mirror    ONLINE       0     0    16
            c0d1    ONLINE       0     0    51
            c1d1    ONLINE       0     0    54

errors: Permanent errors have been detected in the following files:

        /testing/ZFS-problem/ZFS-problem-flat.vmdk


Regards

Mikael