Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-09 Thread Ross
Hi Neil,

No problem, thanks for the info.  I knew these cards were a gamble, but if I 
can get them working, it will be worth it.

I've spoken today to the UK office of VMetro about drivers, but I'm not holding 
out too much hope.  They were very friendly and polite, but explained that they 
simply don't support these for end users; they're for large OEMs only.

However, I did do a bit of digging this morning and found the e-mail address of 
Micro Memory's lead software developer, who looks to be the chap responsible 
for developing the Solaris drivers in the first place.  So I'll be dropping him 
a line shortly and seeing if he can help at all.

Ross
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] slog device

2008-07-09 Thread Ross
I'm not personally aware of any.  The ioDrive from Fusion-io looks the most 
promising, but it's a new product from a new company so it's likely to be a 
while (if ever) before Solaris drivers appear.  I've contacted them to ask 
about Solaris drivers, but haven't had a response yet.

I summarised my findings in a long post about 2/3 of the way down this thread:
http://www.opensolaris.org/jive/thread.jspa?threadID=65074&tstart=30
 
 


Re: [zfs-discuss] slog device

2008-07-09 Thread Ross
PS.  I note on the Fusion-io web page that they're working with HP to 
accelerate their servers.  Would be nice if somebody from Sun could do the same 
(or let us know if Sun are working on similar technology).
 
 


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-09 Thread Al Hopper
On Tue, Jul 8, 2008 at 8:56 AM, Darren J Moffat [EMAIL PROTECTED] wrote:
> Pete Hartman wrote:
>> I'm curious which enclosures you've had problems with?
>>
>> Mine are both Maxtor One Touch; the 750 is slightly different in that it
>> has a FireWire port as well as USB.
>
> I've had VERY bad experiences with the Maxtor One Touch and ZFS.  To the
> point that we gave up trying to use them.  We last tried on snv_79 though.


I've had bad experiences with the Seagate products.  Last time I read
a bunch of customer reviews on newegg.com and it seemed to be split
between those with no issues and those with failures.  My guess is
that it's related to duty cycle - casual users who really don't beat
up on the drive will have no problems, while power users will
probably kill the drive.  If my guess is correct, it's simply physics
- lack of airflow over the HDA (head disk assembly).

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


[zfs-discuss] Thumper wedged somewhere in ZFS

2008-07-09 Thread Ceri Davies
Forwarding here, as suggested by chaps on storage-discuss.

Just to clarify, I was running filebench directly on the x4500, not from
an initiator, so this is probably not a COMSTAR thing.

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere
---BeginMessage---
We've got an x4500 running SXCE build 91 with stmf configured to share
out a (currently) small number (9) of LUs to a (currently) small number
of hosts (4).

The x4500 is configured with ZFS root mirror, 6 RAIDZ sets across all
six controllers, some hot spares in the gaps and a RAID10 set to use
everything else up.

Since this is an investigative setup, I have been running filebench
locally on the x4500 to get some stats before moving on to do the same
on the initiators against the x4500 and our current storage.

While running the filebench OLTP workload with $filesize=5g on one of
the RAIDZ pools, the x4500 seemed to hang while creating the fileset.
On further investigation, a lot of things actually still worked; log in
via SSH was fine, /usr/bin/ps worked ok, /usr/ucb/ps and any of the
/usr/proc ptools just hung, man hung, and so on.  savecore -L managed
to do a dump but couldn't seem to exit.

So I did a hard reset, the system came up fine and I actually do have
the dump from savecore -L.  I'm kind of out of my depth with mdb, but
it looks pretty clear to me that all of the hung processes were
somewhere in ZFS:

# mdb -k unix.0 vmcore.0 
mdb: failed to read panicbuf and panic_reg -- current register set will
be unavailable
Loading modules: [ unix genunix specfs dtrace cpu.generic
cpu_ms.AuthenticAMD.15 uppc pcplusmp scsi_vhci zfs sd ip hook neti sctp
arp usba fctl nca lofs md cpc random crypto nfs fcip logindmux ptm nsctl
ufs sppp ipc ]
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    3085149             12051   74%
Anon                        20123                78    0%
Exec and libs                3565                13    0%
Page cache                 200779               784    5%
Free (cachelist)           193955               757    5%
Free (freelist)            663990              2593   16%

Total                     4167561             16279
Physical                  4167560             16279
> ::pgrep ptree
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R   1825   1820   1825   1803      0 0x4a004000 ff04f5096c80 ptree
R   1798   1607   1798   1607  15000 0x4a004900 ff04f7b72930 ptree
R   1795   1302   1795   1294      0 0x4a004900 ff05179f7de0 ptree
> ::pgrep ptree | ::walk thread | ::findstack
stack pointer for thread ff04ea2ca440: ff00201777d0
[ ff00201777d0 _resume_from_idle+0xf1() ]
  ff0020177810 swtch+0x17f()
  ff00201778b0 turnstile_block+0x752()
  ff0020177920 rw_enter_sleep+0x1b0()
  ff00201779f0 zfs_getpage+0x10e()
  ff0020177aa0 fop_getpage+0x9f()
  ff0020177c60 segvn_fault+0x9ef()
  ff0020177d70 as_fault+0x5ae()
  ff0020177df0 pagefault+0x95()
  ff0020177f00 trap+0xbd3()
  ff0020177f10 0xfb8001d9()
stack pointer for thread ff04e8752400: ff001f9307d0
[ ff001f9307d0 _resume_from_idle+0xf1() ]
  ff001f930810 swtch+0x17f()
  ff001f9308b0 turnstile_block+0x752()
  ff001f930920 rw_enter_sleep+0x1b0()
  ff001f9309f0 zfs_getpage+0x10e()
  ff001f930aa0 fop_getpage+0x9f()
  ff001f930c60 segvn_fault+0x9ef()
  ff001f930d70 as_fault+0x5ae()
  ff001f930df0 pagefault+0x95()
  ff001f930f00 trap+0xbd3()
  ff001f930f10 0xfb8001d9()
stack pointer for thread ff066fbc6a80: ff001f27de90
[ ff001f27de90 _resume_from_idle+0xf1() ]
  ff001f27ded0 swtch+0x17f()
  ff001f27df00 cv_wait+0x61()
  ff001f27e040 vmem_xalloc+0x602()
  ff001f27e0b0 vmem_alloc+0x159()
  ff001f27e140 segkmem_xalloc+0x8c()
  ff001f27e1a0 segkmem_alloc_vn+0xcd()
  ff001f27e1d0 segkmem_zio_alloc+0x20()
  ff001f27e310 vmem_xalloc+0x4fc()
  ff001f27e380 vmem_alloc+0x159()
  ff001f27e410 kmem_slab_create+0x7d()
  ff001f27e450 kmem_slab_alloc+0x57()
  ff001f27e4b0 kmem_cache_alloc+0x136()
  ff001f27e4d0 zio_data_buf_alloc+0x28()
  ff001f27e510 arc_get_data_buf+0x175()
  ff001f27e560 arc_buf_alloc+0x9a()
  ff001f27e610 arc_read+0x122()
  ff001f27e6b0 dbuf_read_impl+0x129()
  ff001f27e710 dbuf_read+0xc5()
  ff001f27e7c0 dmu_buf_hold_array_by_dnode+0x1c4()
  ff001f27e860 dmu_read+0xd4()
  ff001f27e910 zfs_fillpage+0x15e()
  ff001f27e9f0 zfs_getpage+0x187()
  ff001f27eaa0 fop_getpage+0x9f()
  ff001f27ec60 segvn_fault+0x9ef()
  ff001f27ed70 as_fault+0x5ae()
  ff001f27edf0 pagefault+0x95()
  ff001f27ef00 trap+0xbd3()
  ff001f27ef10 0xfb8001d9()
> ::pgrep go_filebench | ::walk thread | ::findstack
stack pointer for thread 

Re: [zfs-discuss] slog device

2008-07-09 Thread Richard Elling
Ross wrote:
> PS.  I note on the Fusion-io web page that they're working with HP to
> accelerate their servers.  Would be nice if somebody from Sun could do
> the same (or let us know if Sun are working on similar technology).

I thought the cat was already out of the bag... :-)
http://blogs.sun.com/jonathan/entry/not_a_flash_in_the
 -- richard



Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-09 Thread Miles Nordin
>>>>> "r" == Ross [EMAIL PROTECTED] writes:
>>>>> "np" == Neil Perrin [EMAIL PROTECTED] writes:

    np> 2. I received the board and driver from another group within
    np> Sun.  It would be better to contact Micro Memory (or whoever
    np> took them over) directly, as it's not my place to give out 3rd
    np> party drivers or provide support for them.

Then hopefully when Sun releases their new batch of SSD devices, they
will release source for the full driver stack under a redistributable
license so that no well-meaning geek has to be in your awkwardly
unhelpful position, caught between obligations of
NDA/copyright/``place'' and the basic and reasonable obligations
necessary to maintain a ``community''.  

I've heard Sun people at users' groups promise that all new Solaris
subsystems will include source, but so far this doesn't apply to
hardware, not even to the hardware Sun sells.  In this case source
would solve (1) and (2) because you'd be (2) free to redistribute
whatever you had a month ago, and Ross would (1) have a fighting
chance of forward-porting the driver he got from you.  

This isn't the case for existing Sun disk drivers that I know about
like the X4500 SATA chip or the LSI Logic mpt RAID card in SPARC SATA
systems, while Linux and I think BSD have free software drivers for
both chips---at best the Sun drivers are (2) redistributable, and I'm
not even clear on that because it's surprisingly tricky to determine.

     r> they simply don't support these for end users, it's for large
     r> OEM's only. [...]  found the e-mail address of Micro Memory's
     r> lead software developer,

who, unlike the salespeople, will probably understand the obvious
difference between providing ``support,'' and taking the basic
responsibility to either archive all downloadables that aren't
redistributable, or make them redistributable if they don't want to
track them any more, but who probably won't be in a position to help
you any more than Neil is.

If their contractor did give you the drivers, would you avoid
mentioning it here for fear a bunch of other people would ask you for
copies, putting you in the same awkward spot?  Would you justify the
reticence by thinking you were hiding the drivers from us out of
loyalty and ``gratitude'' to the contractor who wrote them?  It
stinks, and I recognize the smell.  We've been here before.  I ought
to have better things to do with my life than pirating software to
support obscure proprietary abandonware (but apparently not better
than writing emails whining about the situation).




Re: [zfs-discuss] confusion and frustration with zpool

2008-07-09 Thread Miles Nordin
>>>>> "ah" == Al Hopper [EMAIL PROTECTED] writes:

    ah> I've had bad experiences with the Seagate products.

I've had bad experiences with all of them.  
(maxtor, hgst, seagate, wd)

    ah> My guess is that it's related to duty cycle -

Recently I've been getting a lot of drives from companies like newegg
and zipzoomfly that fail within the first month.  The rate is high
enough that I would not trust a two-way mirror with 1mo old drives.

Then I have drives with a few unreadable sectors 2 - 5 years into
their life, from all manufacturers.  I test them with 'smartctl -t
long', and either send them for warranty repair or abandon them.  I
suspect usually 'dd if=/dev/zero of=<drive>' would fix such a disk
unless the ``reallocated sector count'' is too high, but I just
pretend every drive is on lease for its warranty period.  The
PATA/SATA/SATA2NCQ interfaces and capacity-per-watt change about that
often anyway.
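The triage above (run 'smartctl -t long', then repair or RMA depending on the reallocated-sector count) can be sketched as a small script.  This is a hypothetical illustration: the canned attribute line, the device name in the comment, and the threshold of 50 are assumptions, not vendor guidance; on a real system you would feed it the output of smartctl -A against the drive.

```shell
#!/bin/sh
# Triage sketch: pull Reallocated_Sector_Ct out of smartctl attribute output
# and decide between a zero-fill "repair" and a warranty return.
# SAMPLE is a canned line standing in for: smartctl -A /dev/rdsk/c1t0d0s0
SAMPLE='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12'

# The last field of the attribute line is the raw reallocated-sector count.
REALLOC=$(echo "$SAMPLE" | awk '/Reallocated_Sector_Ct/ { print $NF }')
THRESHOLD=50    # arbitrary illustration, not a vendor figure

if [ "$REALLOC" -gt "$THRESHOLD" ]; then
    VERDICT="RMA: $REALLOC reallocated sectors"
else
    VERDICT="zero-fill candidate: $REALLOC reallocated sectors"
fi
echo "$VERDICT"
```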

I send so many drives back for repair that it only makes financial
sense to buy 5-year-warranty drives.  I don't think they can make any
money on me with the rate I send them, but if more people did this
maybe they would learn to make disks that don't suck.  Maybe they are
giving me all their marginal ones or something, by using ``sales
channels''---we pour our shit down THIS channel.  In that case they
could still make money.




[zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Adrian Danielson
Here is what I have configured:

   T2000 with OBP 4.28.6 2008/05/23 12:07, with 2 x 72 GB disks as the root disks
   OpenSolaris Nevada Build 91
      Solaris Express Community Edition snv_91 SPARC
      Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
      Use is subject to license terms.
      Assembled 03 June 2008
   Installed from DVD as ZFS boot filesystems
   1 SAN disk attached from IBM SVC using DS8300 storage.
   2 x 1-port QLogic cards attached to McData 6064 Directors


Here are my questions:

1.  After the install I created a ZFS mirror of the root disk, c0t0d0 to
c0t1d0.  format now shows the mirrored disk with sectors instead of
cylinders.  Is this normal or correct?  If not, is there a way to change it
back to cylinders?  The same goes for the external disk pool using SAN disk
from the IBM SVC.

2.  After enabling MPxIO (stmsboot -e), the 2 root disks now have MPxIO
labels.  Is this a bug with ZFS boot using MPxIO?  I have MPxIO running on
Solaris 10 update 4 with none of this behavior.  (I have 2 T2000s, 1 with
SVM root disks and the other with Veritas encapsulated root disks; all
external or non-root filesystems are managed by Veritas volume management,
not ZFS.)

  From format:

   0. c4t5000C5000AF82EDBd0 <SEAGATE-ST973402SSUN72G-0603-68.37GB>
      /scsi_vhci/[EMAIL PROTECTED]
   1. c4t5000C5000AF834ABd0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
      /scsi_vhci/[EMAIL PROTECTED]
   2. c4t60050768019081653617d0 <IBM-2145--36.00GB>
      /scsi_vhci/[EMAIL PROTECTED]

   root[:/root]# stmsboot -L
   non-STMS device name    STMS device name
   ------------------------------------------------------
   /dev/rdsk/c0t0d0        /dev/rdsk/c4t5000C5000AF834ABd0
   /dev/rdsk/c0t1d0        /dev/rdsk/c4t5000C5000AF82EDBd0


3.  Any good references for using ZFS with MPxIO?

Thanks in advance,
Adrian
 
 


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-09 Thread Keith Bierman

On Jul 9, 2008, at 11:12 AM, Miles Nordin wrote:

 ah == Al Hopper [EMAIL PROTECTED] writes:

 ah I've had bad experiences with the Seagate products.

 I've had bad experiences with all of them.
 (maxtor, hgst, seagate, wd)

 ah My guess is that it's related to duty cycle -

 Recently I've been getting a lot of drives from companies like newegg
 and zipzoomfly that fail within the first month.  The rate is high
 enough that I would not trust a two-way mirror with 1mo old drives.


While I've always had good luck with zipzoomfly, infant mortality is a
well-known feature of many devices.  Your advice to do some burn-in
testing of drives before putting them into full production is probably
very sound for sites large enough to maintain a bit of inventory ;)
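The burn-in idea can be sketched as a write-then-read-back pass.  For safety this version targets a scratch file; the device path, size, and pass structure are illustrative assumptions, and pointing DEV at a real disk (e.g. /dev/rdsk/c1t0d0s2) destroys everything on it, so check the name twice.

```shell
#!/bin/sh
# Burn-in sketch: fill the target with a known pattern, then read it all
# back and confirm every byte is still readable.  Scaled down to 4 MB here.
DEV=/tmp/burnin.$$        # stand-in for a raw disk device
SIZE_MB=4                 # tiny for illustration; use the whole disk for real

# Pass 1: write.
dd if=/dev/zero of="$DEV" bs=1048576 count="$SIZE_MB" 2>/dev/null

# Pass 2: read back and count what we got.
READ_BYTES=$(dd if="$DEV" bs=1048576 2>/dev/null | wc -c | tr -d ' ')
EXPECTED=$((SIZE_MB * 1048576))

if [ "$READ_BYTES" -eq "$EXPECTED" ]; then
    STATUS="burn-in pass OK ($SIZE_MB MB written and re-read)"
else
    STATUS="burn-in FAILED: read $READ_BYTES of $EXPECTED bytes"
fi
echo "$STATUS"
rm -f "$DEV"
```

A real burn-in would repeat passes like this for a few days and watch 'smartctl -A' in between for growing reallocation counts.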


-- 
Keith H. Bierman   [EMAIL PROTECTED]  | AIM kbiermank
5430 Nassau Circle East  |
Cherry Hills Village, CO 80113   | 303-997-2749
speaking for myself* Copyright 2008






Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-09 Thread Ross
I think the problem, Miles, is that this isn't Sun hardware, and I completely 
understand that as a Sun employee, Neil really can't be seen to distribute 
something that's untested and unsupported, and quite possibly under NDA.

On the other hand, if I get hold of these drivers, I'm under no such obligation 
and I'll happily make them available to everybody who wants them.  I 
already know of two other people who are keen to get these and I'm sure there 
are others.

These cards are starting to show up on the second-hand market now, so finding a 
set of Solaris drivers would be a welcome bonus for a good few people.
 
 


Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-09 Thread Tim
On Wed, Jul 9, 2008 at 1:14 PM, Ross [EMAIL PROTECTED] wrote:

 I think the problem Miles is that this isn't Sun hardware, and I completely
 understand that as a Sun employee, Neil really can't be seen to distribute
 something that's untested and unsupported, and quite possibly under NDA.

 On the other hand, if I get hold of these drivers, I'm under no such
 obligation and I'll be happily making them available for everybody who wants
 them.  I already know of two other people who are keen to get these and I'm
 sure there are others.

 These cards are starting to show up on the second hand market now, finding
 a set of Solaris drivers would be a welcome bonus for a good few people.




Do we have drivers available for ANY OS for these cards currently?  It'd be
nice to at least be able to test if they function properly.

--Tim


Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Peter Tribble
On Wed, Jul 9, 2008 at 6:27 PM, Adrian Danielson
[EMAIL PROTECTED] wrote:
 Here is what I have configured:

   T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 - 72 GB disks as the root 
 disks
   OpenSolaris Nevada Build 91
...
 2.  After enabling MPxIO (stmsboot -e), the 2 root disks now have MPxIO 
 labels, is this a bug with ZFS boot using MPxIO?  I have MPxIO running on 
 Solaris 10 release 4 with none of this behavior (I have 2 T2000's, 1 with SVM 
 root disks and other with Veritas Encapsulated root disks, all external or 
 non root filesystems are managed by Veritas volume management, not ZFS).

Nothing to do with ZFS.  Current versions of the mpt driver, used in a lot
of current Sun systems for the internal drive and for external SAS
connectivity, support MPxIO as well.  (Solaris 10 update 4 doesn't have it;
it came soon after in a patch.)

You can restrict stmsboot to only enable MPxIO on the mpt or fibre
interfaces using 'stmsboot -D mpt' or 'stmsboot -D fp'.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


[zfs-discuss] And the answer to why can't ZFS find a plugged in disk is ...

2008-07-09 Thread James Litchfield
 From an email exchange with a HAL developer...

 This comes about because I boot back and forth between Windows
 and Solaris and when on the Windows side I have the drive unplugged.
 On occasion, I forget to plug it back in before returning to Solaris.

 I wonder then, if Solaris should export removable ZFS volumes on 
 shutdown.

 Seems a strange limitation for HAL to not attempt to mount a zfs file
 system. If it's not imported the mount fails and an error can be 
 generated. If
 it's imported then everything just works. What was the reasoning for 
 this?

 There are multiple reasons. Initially, when HAL was introduced in 
 Solaris (PSARC 2005/399), ZFS did not support hotplug very well or at 
 all. Also, HAL's object model only accomodates traditional single 
 device volumes; it needs to be expanded to account for ZFS's volumes 
 than span multiple devices. There are also more operations than just 
 mount/unmount possible, and sometimes necessary, on ZFS datasets, and 
 HAL simply lacks such interfaces. The third problematic area is that 
 now that ZFS itself includes some sort of hotplug magic, there needs 
 to be coordination with HAL-based volume managers. There are also 
 potential difficulties related to different security models between 
 traditionally mounted filesystems and ZFS.

 In other words, there is nothing fundamentally preventing HAL from 
 supporting ZFS, but the amount of new design is enough for a 
 full-blown project.




Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Richard Elling
Adrian Danielson wrote:
 Here is what I have configured:

T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 - 72 GB disks as the 
 root disks
OpenSolaris Nevada Build 91
   Solaris Express Community Edition snv_91 SPARC
Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
 Use is subject to license terms.
  Assembled 03 June 2008

Installed from DVD as ZFS boot filesystems
1 SAN disk attached from IBM SVC using DS8300 storage.
2 - 1 port Qlogic cards attached to McData 6064 Directors


 Here's my questions:

 1.  After the install I created a zfs mirror of the root disk c0t0d0 to 
 c0t1d0, format shows the mirrored disk with sectors instead of cylinders, is 
 this normal or correct?  Is there a way to reverse this back to cylinders if 
 it is not?  Same goes for the external disk pool using SAN disk from the IBM 
 SVC.
   

Please verify that you followed the procedures for mirroring ZFS boot
disks in the ZFS Administration Guide:

http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

As always, I also suggest testing prior to production roll-out.
 -- richard
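The documented procedure boils down to attaching the second disk to the root pool and installing a boot block on it.  A dry-run sketch follows, with RUN=echo so the commands are only printed; the pool name rpool, the device names taken from the original post, and the sun4v platform directory are assumptions that will differ per system.

```shell
#!/bin/sh
# Dry-run sketch of mirroring a ZFS root per the admin guide: attach the
# second disk to the root pool, then install the SPARC boot block on it.
# RUN=echo prints each command instead of executing it; clear it to run.
RUN=echo

CMD_ATTACH="zpool attach -f rpool c0t0d0s0 c0t1d0s0"
CMD_BOOTBLK="installboot -F zfs /usr/platform/sun4v/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0"

$RUN $CMD_ATTACH     # resilver starts; watch it with: zpool status rpool
$RUN $CMD_BOOTBLK    # without this the second disk is not bootable
```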



[zfs-discuss] RFE: ZFS commands zmv and zcp

2008-07-09 Thread Raquel K. Sanborn
I've run across something that would save me days of trouble.

Situation: the contents of one ZFS file system need to be moved to another
ZFS file system.  The destination can be in the same zpool, even a brand
new ZFS file system.  A command to move the data from one ZFS file system
to another, WITHOUT COPYING, would be nice.  At present, the data is
almost 1TB.

Ideally a zmv or zcp program would be nice.

And no, zfs send and zfs receive won't do the same thing.  Those would
take hours, or possibly days, to copy 1TB within the same zpool.  Plus, I
would have to make the source R/O during the copy.  I can create a new
zpool or send the data to another zpool that has space, but then I end up
with 1TB of unused space on the original zpool.
 
 


[zfs-discuss] previously mentioned J4000 released

2008-07-09 Thread Chad Lewis
Here's the announcement for those new Sun JBOD devices mentioned the  
other day.

http://www.sun.com/aboutsun/pr/2008-07/sunflash.20080709.1.xml

ckl



Re: [zfs-discuss] RFE: ZFS commands zmv and zcp

2008-07-09 Thread Richard Elling
Raquel K. Sanborn wrote:
 I've run across something that would save me days of trouble.

 Situation, the contents of one ZFS file system needs to be moved to another 
 ZFS file system. The
 destination can be the same Zpool, even a brand new ZFS file system. A 
 command to move the
 data from one ZFS file system to another, WITHOUT COPYING, would be nice. At 
 present, the data is
 almost 1TB.

 Ideally a zmv or zcp program would be nice.

 And no, zfs send and zfs receive won't do the same thing. Those would 
 require hours, or possibly
 days to copy 1TB to the same Zpool. Plus, I would have to make the source R/O 
 during the copy.
 I can create a new Zpool or send the data to another Zpool that has space, 
 but then I end up with a
 1TB of un-used space on the original Zpool.
   

Please follow the thread discussed here last December.
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-December/044975.html
 -- richard



Re: [zfs-discuss] RFE: ZFS commands zmv and zcp

2008-07-09 Thread Bob Friesenhahn
On Wed, 9 Jul 2008, Raquel K. Sanborn wrote:

 Situation, the contents of one ZFS file system needs to be moved to 
 another ZFS file system. The destination can be the same Zpool, even 
 a brand new ZFS file system. A command to move the data from one ZFS 
 file system to another, WITHOUT COPYING, would be nice. At present, 
 the data is almost 1TB.

I agree that this would be quite useful.  Is it possible that 
snapshot + clone + promote could be useful for your current purpose?
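Bob's snapshot + clone + promote suggestion would look roughly like the following.  This is a dry-run sketch with RUN=echo, and the dataset names tank/data and tank/data2 are made up for illustration; the clone shares blocks with the origin rather than copying them, and promote reverses the parent/clone dependency so the original file system can later be destroyed.

```shell
#!/bin/sh
# Sketch of a "move" via snapshot + clone + promote (no data copied).
# RUN=echo prints the commands instead of running them.
RUN=echo
FS=tank/data        # hypothetical source file system
NEWFS=tank/data2    # hypothetical destination

$RUN zfs snapshot "$FS@move"         # point-in-time image of the source
$RUN zfs clone "$FS@move" "$NEWFS"   # writable clone, shares blocks
$RUN zfs promote "$NEWFS"            # clone takes ownership of the snapshot
```

After the promote, the old file system depends on the clone instead of the other way around.  Still no zmv, but within one pool it avoids copying the 1TB.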

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/



[zfs-discuss] X4540

2008-07-09 Thread Tim
So, I see Sun finally updated the Thumper, and it appears they're now using
a PCI-E backplane.  Anyone happen to know what the chipset is?  Any chance
we'll see an 8-port PCI-E SATA card finally??

The new Sun Fire X4540 server uses PCI Express IO technology for more than
triple the system IO-to-network bandwidth.

http://www.sun.com/servers/x64/x4540/

--Tim


Re: [zfs-discuss] previously mentioned J4000 released

2008-07-09 Thread Tim
On Wed, Jul 9, 2008 at 2:39 PM, Chad Lewis [EMAIL PROTECTED] wrote:

 Here's the announcement for those new Sun JBOD devices mentioned the
 other day.

 http://www.sun.com/aboutsun/pr/2008-07/sunflash.20080709.1.xml

 ckl




So are these *tagged* drives/firmware?  Do we have to buy them direct from
Sun or can we throw anything we want at it?  Does it come pre-loaded with
real drive trays instead of useless blanks?

--Tim


Re: [zfs-discuss] X4540

2008-07-09 Thread Eric Schrock
The X4540 uses on-board LSI SAS controllers (C1068E).

- Eric

On Wed, Jul 09, 2008 at 02:59:26PM -0500, Tim wrote:
 So, I see Sun finally updated the Thumper, and it appears they're now using
 a PCI-E backplane.  Anyone happen to know what the chipset is?  Any chance
 we'll see an 8-port PCI-E SATA card finally??
 
 The new Sun Fire X4540 server uses PCI Express IO technology for more than
 triple the system IO-to-network bandwidth.
 
 http://www.sun.com/servers/x64/x4540/
 
 --Tim



--
Eric Schrock, Fishworks        http://blogs.sun.com/eschrock


Re: [zfs-discuss] X4540

2008-07-09 Thread Tim
On Wed, Jul 9, 2008 at 3:09 PM, Eric Schrock [EMAIL PROTECTED] wrote:

 The X4540 uses on-board LSI SAS controllers (C1068E).

 - Eric

 On Wed, Jul 09, 2008 at 02:59:26PM -0500, Tim wrote:
  So, I see Sun finally updated the Thumper, and it appears they're now
 using
  a PCI-E backplane.  Anyone happen to know what the chipset is?  Any
 chance
  we'll see an 8-port PCI-E SATA card finally??
 
  The new Sun Fire X4540 server uses PCI Express IO technology for more
 than
  triple the system IO-to-network bandwidth.
 
  http://www.sun.com/servers/x64/x4540/
 
  --Tim



 --
 Eric Schrock, Fishworks
 http://blogs.sun.com/eschrock



Perfect.  Which means good ol' supermicro would come through :)  WOHOO!

AOC-USAS-L8i

http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

--Tim


Re: [zfs-discuss] zfs-discuss Digest, Vol 33, Issue 19

2008-07-09 Thread Miles Nordin
>>>>> "r" == Ross [EMAIL PROTECTED] writes:

     r> I think the problem Miles is that this isn't Sun hardware

In this case it's not, but please do not muddle my point: Marvell SATA
and LSI Logic mpt SATARAID and many other (most?) drivers have the
same problem.

Right now there are, AIUI:

 * closed-source non-redistributable drivers (SXCE only)

 * closed-source redistributable drivers (SXCE, Indiana, Nexenta)

 * open-source redistributable drivers (SXCE, Indiana, Nexenta)

The logical fourth category of open-source non-redistributable drivers
doesn't exist---you CAN have a non-redistributable, $0 driver which
includes source code, but it wouldn't meet the open source
specification.

The word ``third-party driver'' is thrown around a lot.  I guess it
was a common word in the pre-Opensolaris days?  The three categories
are orthogonal to bundling or support entitlements, and there are
plenty of Solaris/SXCE-bundled, support-entitled drivers in the first
category.

     r> I completely understand that as a Sun employee, Neil really
     r> can't be seen to distribute something that's untested and
     r> unsupported, and quite possibly under NDA.

AIUI it's not personal, or be-seen-as.  It's, do you have the right to
do it, or do you not.

For example, I do not have the right to give you an SXCE DVD.  You
have to download it yourself.  (hope you don't need an old version!)
I DO have the right to give you a Nexenta or OpenSolaris 2008.05 DVD.
This is redistribution.  To pass redistribution rights on to me, Sun
left drivers out of the OpenSolaris/Indiana release and Nexenta out of
the Nexenta release.

And just as Micro Memory can take a formerly-$0 driver down from their
web page, Sun can take down the SXCE b12345 .iso, and if you don't
already have a copy hoarded you're not technically allowed to have
your friend copy his DVD and give it to you.

     r> if I get hold of these drivers, I'm under no such obligation

The obligation would come when you get the drivers---you'll be given
drivers on the condition you agree to something.  Since you don't have
them yet, you're in a bad position to promise this.

You could promise, ``I won't accept drivers from anyone who makes me
promise not to redistribute them or not to release the source code of
them,'' (or publish benchmarks without the manufacturer's approval
COUGH COUGH) which is what I _wish_ Sun would do to the chip and card
vendors from which they get components in the hardware they sell, but
they don't.

You could also promise, ``If someone makes me agree not to
redistribute this, I'll agree and then break the agreement, because I
care more about preserving the community than I do about respecting
legal agreements.''

To me it seems like technical people take exclusively the former
approach, and casual non-technical users almost exclusively the
latter.  I guess there are a lot of people in the world who can
repeatedly make the latter statement publicly without hurting their
careers, but maybe not many such people on this list.

anyway sorry it's OT.  I'll drop it now.  I should hunt for a
[EMAIL PROTECTED] list or something.




Re: [zfs-discuss] X4540

2008-07-09 Thread Mike Gerdts
On Wed, Jul 9, 2008 at 2:59 PM, Tim [EMAIL PROTECTED] wrote:
 So, I see Sun finally updated the Thumper, and it appears they're now using
 a PCI-E backplane.  Anyone happen to know what the chipset is?  Any chance
 we'll see an 8-port PCI-E SATA card finally??

 The new Sun Fire X4540 server uses PCI Express IO technology for more than
 triple the system IO-to-network bandwidth.

 http://www.sun.com/servers/x64/x4540/

Any word on why PCI-Express was not extended to the expansion slots?
I put PCI-Express cards in every other server that I connect to 10
gigabit Ethernet or the SAN (FC tape drives).

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


Re: [zfs-discuss] previously mentioned J4000 released

2008-07-09 Thread Tim
On Wed, Jul 9, 2008 at 2:39 PM, Chad Lewis [EMAIL PROTECTED] wrote:

 Here's the announcement for those new Sun JBOD devices mentioned the
 other day.

 http://www.sun.com/aboutsun/pr/2008-07/sunflash.20080709.1.xml

 ckl




Very interesting, I have two questions:

Does this require tagged drives?  I.e., do we *HAVE* to purchase all drives
that go into these direct from Sun?
Does it ship with real drive trays in the *empty* slots, or those worthless
blanks that won't hold a drive?

--Tim


Re: [zfs-discuss] X4540

2008-07-09 Thread Richard Elling
Tim wrote:
 So, I see Sun finally updated the Thumper, and it appears they're now 
 using a PCI-E backplane.  Anyone happen to know what the chipset is?  
 Any chance we'll see an 8-port PCI-E SATA card finally??

One NVidia MCP-55 and two NVidia IO-55s replace the thumper's
AMD-8132 HT to PCI-X bridges.  The new configuration is such
that the expandable PCI-E slots have their own IO-55.  The
MCP-55 and one IO-55 connect to 3 LSI 1068E and provide
2x GbE each. This should be a better balance than the thumper's
configuration.

LSI 1068E SAS/SATA controllers replace thumper's Marvell
SAS/SATA controllers.  You might recognize the LSI 1068,
and its smaller cousin, the 1064, as being used in many other
Sun servers from the T1000 to the M9000.

8-port PCI-E SAS/SATA card is supported for additional
expansion, such as a J4500 (the JBOD-only version)
http://www.sun.com/storagetek/storage_networking/hba/sas/specs.xml

The best news, for many folks, is that you can boot from an
(externally pluggable) CF card, so that you don't have to burn
two disks for the OS.

I think we have solved many of the deficiencies noted in the
thumper, including more CPU and memory capacity.  Please
let us know what you think :-)
 -- richard



 The new Sun Fire X4540 server uses PCI Express IO technology for more 
 than triple the system IO-to-network bandwidth.

 http://www.sun.com/servers/x64/x4540/

 --Tim

 

   



Re: [zfs-discuss] X4540

2008-07-09 Thread Eric Schrock
On Wed, Jul 09, 2008 at 03:19:53PM -0500, Mike Gerdts wrote:
 
 Any word on why PCI-Express was not extended to the expansion slots?
 I put PCI-Express cards in every other server that I connect to 10
 gigabit Ethernet or the SAN (FC tape drives).
 

The webpage is incorrect.  There are three 8x PCI-E half-height slots on
the X4540.

- Eric

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock


Re: [zfs-discuss] RFE: ZFS commands zmv and zcp

2008-07-09 Thread Raquel K. Sanborn
Thanks, glad someone else thought of it first.

I guess I will have to do things the hard way.

Raquel
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] X4540

2008-07-09 Thread Tim
Might also want to have them talk to byteandswitch.
"We went to the next-generation Intel processors [and] we have used the
latest generation of our Solaris ZFS software," he explains, adding that the
J4000 JBODs can also be connected to the X4540.

Either the 4540 is using Xeons now, someone was misquoted, or someone was
confused :)

http://www.byteandswitch.com/document.asp?doc_id=158533&WT.svl=news1_1

--Tim


On Wed, Jul 9, 2008 at 3:44 PM, Richard Elling [EMAIL PROTECTED]
wrote:

 Yes, thanks for catching this.  I'm sure it is just a copy-n-paste
 mistake.  I've alerted product manager to get it fixed.
 -- richard


 Mike Gerdts wrote:

 On Wed, Jul 9, 2008 at 3:29 PM, Richard Elling [EMAIL PROTECTED]
 wrote:


 8-port PCI-E SAS/SATA card is supported for additional
 expansion, such as a J4500 (the JBOD-only version)
 http://www.sun.com/storagetek/storage_networking/hba/sas/specs.xml



 Based upon my previous message, this message, and Jeorg Moellenkamp's
 blog entry[1], I think that the hardware specifications page[2] needs
 to be updated so that the expansion slots say PCI-Express rather than
 PCI-X.

 1.
 http://www.c0t0d0s0.org/archives/4605-New-storage-from-Sun-J420044004500-and-X4540-Storage-Server.html
 2. http://www.sun.com/servers/x64/x4540/specs.xml







Re: [zfs-discuss] X4540

2008-07-09 Thread Eric Schrock
On Wed, Jul 09, 2008 at 03:52:27PM -0500, Tim wrote:
 
 Is the 4540 still running a rageXL?  I find that somewhat humorous if it's
 an Nvidia chipset with ATI video :)
 

According to SMBIOS there is an on-board device of type AST2000 VGA.

- Eric

--
Eric Schrock, Fishworkshttp://blogs.sun.com/eschrock


Re: [zfs-discuss] X4540

2008-07-09 Thread Richard Elling
Tim wrote:

 Is the 4540 still running a rageXL?  I find that somewhat humorous if 
 it's an Nvidia chipset with ATI video :)

Yes, it is part of the chip which handles the management interface.
I don't find this to be a contradiction, though.  AMD bought ATI
and we're using AMD Quad-core CPUs.
 -- richard



Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Chris Ridd
Adrian Danielson wrote:
 1.  After the install I created a zfs mirror of the root disk c0t0d0 to 
 c0t1d0, format shows the mirrored disk with sectors instead of cylinders, is 
 this normal or correct?  Is there a way to reverse this back to cylinders if 
 it is not?  Same goes for the external disk pool using SAN disk from the IBM 
 SVC.

Format shows sectors when the disk has an EFI label, and cylinders when 
the disk has a Sun label. ZFS always uses EFI labels, so you're seeing 
the right thing.

You can change the label (blowing away the disk contents of course) 
using format -e. The label menu changes with the -e flag to let you 
choose the kind of label.
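
A minimal sketch of that relabeling flow (assuming a Solaris box and a
throwaway disk: `c0t1d0` and the menu answers are placeholders, prompts vary
by release, and this destroys the disk's contents):

```shell
# Guarded so it is a harmless no-op anywhere but Solaris.
if [ "$(uname -s)" != "SunOS" ]; then
  echo "skipping: requires Solaris"
else
  # format -e adds the SMI/EFI choice to the label menu; the answers
  # below (label, pick SMI, confirm, quit) are illustrative only.
  format -e c0t1d0 <<'EOF'
label
0
y
quit
EOF
fi
```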

Cheers,

Chris


Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Cindy . Swearingen

ZFS uses EFI when a storage pool is created with whole disks.
ZFS uses the old-style VTOC label when a storage pool is created
with slices.

To be able to boot from a ZFS root pool, the storage pool must be
created with slices. This is a new requirement in ZFS land, and is
described in the doc pointer Richard provided previously:

http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

Cindy

Chris Ridd wrote:
 Adrian Danielson wrote:
 
1.  After the install I created a zfs mirror of the root disk c0t0d0 to 
c0t1d0, format shows the mirrored disk with sectors instead of cylinders, is 
this normal or correct?  Is there a way to reverse this back to cylinders if 
it is not?  Same goes for the external disk pool using SAN disk from the IBM 
SVC.
 
 
 Format show sectors when the disk has an EFI label, and cylinders when 
 the disk has a Sun label. ZFS always uses EFI labels, so you're seeing 
 the right thing.
 
 You can change the label (blowing away the disk contents of course) 
 using format -e. The label menu changes with the -e flag to let you 
 choose the kind of label.
 
 Cheers,
 
 Chris


Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread James C. McPherson
Peter Tribble wrote:
 On Wed, Jul 9, 2008 at 6:27 PM, Adrian Danielson
 [EMAIL PROTECTED] wrote:
 Here is what I have configured:

   T2000 with OBP 4.28.6 2008/05/23 12:07 with 2 - 72 GB disks as the 
 root disks
   OpenSolaris Nevada Build 91
 ...
 2.  After enabled MPxIO (stmsboot -e), the 2 root disks now have MPxIO 
 labels, is this a bug with ZFS boot using MPxIO?  I have MPxIO running on 
 Solaris 10 release 4 with none of this behavior (I have 2 T2000's, 1 with 
 SVM root disks and other with Veritas Encapsulated root disks, all external 
 or non root filesystems are managed by Veritas volume management, not ZFS).
 
 Nothing to do with ZFS. Current versions of the mpt driver, used in a
 lot of current Sun
 systems for the internal drive and for external SAS connectivity,
 support mpxio as well.
 (Solaris 10 update 4 doesn't have it - it came soon after in a patch.)
 
 You can restrict stmsboot to only enable mpxio on the mpt or fibre
 interfaces using
 'stmsboot -D mpt' or 'stmsboot -D fp'.


Hi Adrian,
as Peter mentions, this isn't a bug, it's a feature ;) Actually,
it's the feature that I delivered into Solaris 10 last year with
the 125081-10/125082-10 patches.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] X4540

2008-07-09 Thread Richard Elling
Eric Schrock wrote:
 On Wed, Jul 09, 2008 at 03:52:27PM -0500, Tim wrote:
   
 Is the 4540 still running a rageXL?  I find that somewhat humorous if it's
 an Nvidia chipset with ATI video :)

 

 According to SMBIOS there is an on-board device of type AST2000 VGA.
   

Yes, I think I found another copy-n-paste error in some docs :-(
It does appear to be an AST2000, something like:
http://www.aspeedtech.com/ast2000.html

 -- richard



Re: [zfs-discuss] X4540

2008-07-09 Thread Al Hopper
On Wed, Jul 9, 2008 at 3:29 PM, Richard Elling [EMAIL PROTECTED] wrote:
 Tim wrote:
 So, I see Sun finally updated the Thumper, and it appears they're now
 using a PCI-E backplane.  Anyone happen to know what the chipset is?
 Any chance we'll see an 8-port PCI-E SATA card finally??

 One NVidia MCP-55 and two NVidia IO-55s replace the thumper's
 AMD-8132 HT to PCI-X bridges.  The new configuration is such
 that the expandable PCI-E slots have their own IO-55.  The
 MCP-55 and one IO-55 connect to 3 LSI 1068E and provide
 2x GbE each. This should be a better balance than the thumper's
 configuration.

 LSI 1068E SAS/SATA controllers replace thumper's Marvell
 SAS/SATA controllers.  You might recognize the LSI 1068,
 and its smaller cousin, the 1064, as being used in many other
 Sun servers from the T1000 to the M9000.

 8-port PCI-E SAS/SATA card is supported for additional
 expansion, such as a J4500 (the JBOD-only version)
 http://www.sun.com/storagetek/storage_networking/hba/sas/specs.xml

 The best news, for many folks, is that you can boot from an
 (externally pluggable) CF card, so that you don't have to burn
 two disks for the OS.

 I think we have solved many of the deficiencies noted in the
 thumper, including more CPU and memory capacity.  Please
 let us know what you think :-)

Not that I'm in the market for one - but I think a version with
(possibly fewer) 15k RPM SAS disks would be a best seller - especially
for applications that require more IOPS.   Like RDBMS for example.
And yes, I realize that one could install a SAS card into the 4540 and
attach it to one of the SAS based J4nnn  boxes - but that's not the
same physical density that a 4540 with SAS disks would offer.  Or even
a mixture of SATA and SAS drives.

And it would be great if Sun would OEM the Micro Memory (aka vmetro)
cards.  Obviously its only a question of time before Sun will bring
its own RAM/flash cards to the market - but an OEM deal would make
product available now and probably won't compete with what Sun has in
mind (based entirely on my own crystal ball gazing).  We all know how
big a win this is for NFS shares!

Congrats to Sun, Team ZFS and open storage.  The new x45xx and J4xxx
boxes are *great* additions to Sun's product line.


  -- richard



 The new Sun Fire X4540 server uses PCI Express IO technology for more
 than triple the system IO-to-network bandwidth.

 http://www.sun.com/servers/x64/x4540/

 --Tim

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-09 Thread Florin Iucha
Hello,

I plan to use (Open)Solaris for a home file server.  I wanted cool and
quiet hardware, so I picked a mini-atx motherboard and case, an AMD64
CPU and 4 GB of RAM.  My case has room for three hard drives and I
have chosen 3x WD 750 Green Power hard drives.  The file server will
serve out via NFS and Samba the home directories, the library
(collected articles and books in PDF format) and the photo archive
(150GB and growing of photos in RAW format ~ 7-9MB/file).

I cannot use OpenSolaris 2008.05 since it does not recognize the SATA
disks attached to the southbridge. A fix for this problem went into
build 93.  I will use SXCE 93 (for the SATA fix) or SXCE 94 (for the
last revision of the ZFS format).

In order to make the maximum amount of space available for the photos,
I plan to use RAID-5 for that pool.  Also, I would like to have
sufficient redundancy so if a drive goes bad, I can just replace it
and the volume manger/file system will take care of fixing itself
back.

The question is, how should I partition the drives, and what tuning
parameters should I use for the pools and file systems?  From reading
the best practices guides [1], [2], it seems that I cannot have the
root file system on a RAID-5 pool, but it has to be a separate storage
pool.  This seems to be slightly at odds with the suggestion of using
whole-disks for ZFS, not just slices/partitions.

My plan right now is to create a 20 GB and a 720 GB slice on each
disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
(1.440 TB).  Create the root, var, usr and opt file systems in the
first pool, and home, library and photos in the second.  I hope I
won't need swap, but I could create three 1 GB slices (one on each
disk) for that.
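
A quick sanity check of the usable capacity that plan yields (a sketch of
the mirror/raidz arithmetic; the slice sizes are the ones proposed above):

```shell
disks=3
mirror_slice_gb=20      # per-disk slice in the mirrored root pool
raidz_slice_gb=720      # per-disk slice in the raidz data pool

mirror_usable=$mirror_slice_gb                    # n-way mirror: one copy's worth
raidz_usable=$(( (disks - 1) * raidz_slice_gb ))  # raidz: one slice goes to parity

echo "mirror: ${mirror_usable} GB  raidz: ${raidz_usable} GB"
```

The 1440 GB from the raidz pool matches the 1.440 TB figure in the plan.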

Does this sound like a good configuration?

Will the SXCE 9[34] installer allow me to create the above setup?

Should I pass any special parameters to the zfs pool and file system
creation tool to get the best performance?  home and library contain
files between a few KB and a few MB.  photos contains files of roughly 7
to 9 MB.  Should I place those on separate pools?

Note: the hardware is committed (i.e. I already have it), so I am not
inclined to deviate from it 8^)

Thanks,
florin

1: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
2: http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163




Re: [zfs-discuss] slog device

2008-07-09 Thread Al Hopper
On Tue, Jul 8, 2008 at 10:42 PM, Gilberto Mautner
[EMAIL PROTECTED] wrote:
 Hi,

 Anyway, are there other devices out there that you would recommend to use as
 a slog device, other than this nvram card, that would present similar
 performance gains?

Not that this will get you similar performance gains - but don't
overlook putting a couple of small 15k RPM SAS disk drives in the box.
 They work great with ZFS and really help out those poor SATA drives
when ZFS starts beating up on them.  And it also helps if you can't
put more RAM in the box.

Conduct your own experiments with 15k SAS drives as slog/cache
devices.  Worst case scenario, you'll simply end up using them as ZFS
vdevs.  :)

Regards,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] confusion and frustration with zpool

2008-07-09 Thread Anton B. Rang
Also worth noting is that the enterprise-class drives have protection from 
heavy load that the consumer-class drives don't. In particular, there's no 
temperature sensor on the voice coil for the consumer drives, which means that 
under heavy seek load (constant i/o), the drive will eventually overheat. 
[There are plenty of other differences, but this one is important if you plan 
to put a drive into 24/7 use.]
 
 


[zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-09 Thread Brandon High
On Wed, Jul 9, 2008 at 1:12 PM, Tim [EMAIL PROTECTED] wrote:
 Perfect.  Which means good ol' supermicro would come through :)  WOHOO!

 AOC-USAS-L8i

 http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

Is this card new? I'm not finding it at the usual places like Newegg, etc.

It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-09 Thread James McPherson
On Thu, Jul 10, 2008 at 10:34 AM, Brandon High [EMAIL PROTECTED] wrote:
 On Wed, Jul 9, 2008 at 1:12 PM, Tim [EMAIL PROTECTED] wrote:
 Perfect.  Which means good ol' supermicro would come through :)  WOHOO!

 AOC-USAS-L8i

 http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

 Is this card new? I'm not finding it at the usual places like Newegg, etc.

 It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.


It appears to be the non-RAID version of the card (that's what the R
suffix indicates).  If that is the case, then I've got one running quite
happily in my workstation already, using the mpt driver.


James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
 http://www.jmcp.homeunix.com/blog
 http://blogs.sun.com/jmcp
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-09 Thread Tim
Dunno how old it is, but James is right, no Raid which is why it's cheaper.
Also why I like it ;)



On Wed, Jul 9, 2008 at 7:34 PM, Brandon High [EMAIL PROTECTED] wrote:

 On Wed, Jul 9, 2008 at 1:12 PM, Tim [EMAIL PROTECTED] wrote:
  Perfect.  Which means good ol' supermicro would come through :)  WOHOO!
 
  AOC-USAS-L8i
 
  http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm

 Is this card new? I'm not finding it at the usual places like Newegg, etc.

 It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.

 -B

 --
 Brandon High [EMAIL PROTECTED]
 The good is the enemy of the best. - Nietzsche



Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-09 Thread Brandon High
On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha [EMAIL PROTECTED] wrote:
 The question is, how should I partition the drives, and what tuning
 parameters should I use for the pools and file systems?  From reading
 the best practices guides [1], [2], it seems that I cannot have the
 root file system on a RAID-5 pool, but it has to be a separate storage
 pool.  This seems to be slightly at odds with the suggestion of using
 whole-disks for ZFS, not just slices/partitions.

The reason for using a whole disk is that ZFS will turn on the drive's
cache. When using slices, the cache is normally disabled. If all
slices are using ZFS, you can turn the drive cache back on. I don't
think it happens by default right now, but you can set it manually.
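
As a hedged sketch of the difference (pool and device names are
placeholders, not from the thread; guarded to a no-op off Solaris):

```shell
if [ "$(uname -s)" != "SunOS" ]; then
  echo "skipping: requires Solaris"
else
  # Whole disks (no sN suffix): ZFS puts an EFI label on each drive and
  # manages the write cache itself.
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
  # Had slices been given instead (e.g. c1t0d0s0), write-cache management
  # would be left to the administrator, as described above.
fi
```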

Another alternative is to use an IDE to Compact Flash adapter, and
boot off of flash. I'll be building a media server once we move, and
that system will boot from flash. You can also boot from USB keys, but
USB under OpenSolaris seems to be iffy.

Here's the component list that I'm planning to use right now:
http://secure.newegg.com/WishList/PublicWishDetail.aspx?Source=MSWD&WishListNumber=7739092

I *may* change it and boot off another drive that is not part of the
RAID-Z pool.

 My plan right now is to create a 20 GB and a 720 GB slice on each
 disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
 (1.440 TB).  Create the root, var, usr and opt file systems in the
 first pool, and home, library and photos in the second.  I hope I
 won't need swap, but I could create three 1 GB slices (one on each
 disk) for that.

 Does this sound like a good configuration?

If you have enough memory (say 4gb) you probably won't need swap. I
believe swap can live in a ZFS pool now too, so you won't necessarily
need another slice. You'll just have RAID-Z protected swap.
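
A sketch of what ZFS-backed swap looks like (the zvol name and 2 GB size
are illustrative, not from the thread; guarded to a no-op off Solaris):

```shell
if [ "$(uname -s)" != "SunOS" ]; then
  echo "skipping: requires Solaris"
else
  zfs create -V 2g rpool/swap          # carve a 2 GB zvol out of the pool
  swap -a /dev/zvol/dsk/rpool/swap     # add it as a swap device
  swap -l                              # list swap devices to confirm
fi
```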

I built a Linux-based NAS a few years back using an almost identical
scheme and wound up regretting it. In the future I would install the
system on a completely separate disk or group of disks than the shared
pool.

 Should I pass any special parameters to the zfs pool and file system
 creation tool to get the best performance?  home and library contains
 files between few KB and a fer MB.  photos contains file roughly 7 to
 9 MB.  Should I place those on separate pools?

You shouldn't need to do anything. If you want to set the block size,
or enable or disable compression, etc. you can create multiple
filesystems in your pool rather than multiple pools.

 Note: the hardware is committed (i.e. I already have it), so I am not
 inclined to deviate from it 8^)

You might want to look at a 4 or 8 port SATA adapter rather than wait
for the southbridge fixes.

-B

-- 
Brandon High [EMAIL PROTECTED]
The good is the enemy of the best. - Nietzsche


Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-07-09 Thread Brian Hechinger
On Wed, Jul 09, 2008 at 07:53:30PM -0500, Tim wrote:
 Dunno how old it is, but James is right, no Raid which is why it's cheaper.
 Also why I like it ;)

I have the HP badged LSI SAS3080X in my Ultra80, it's a fantastic card.
If I ever get a box with PCI-E (I'm looking to upgrade the U80 soon, so
it might just happen) that card looks like it would be *perfect*.


 On Wed, Jul 9, 2008 at 7:34 PM, Brandon High [EMAIL PROTECTED] wrote:
 
  On Wed, Jul 9, 2008 at 1:12 PM, Tim [EMAIL PROTECTED] wrote:
   Perfect.  Which means good ol' supermicro would come through :)  WOHOO!
  
   AOC-USAS-L8i
  
   http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm
 
  Is this card new? I'm not finding it at the usual places like Newegg, etc.
 
  It looks like the LSI SAS3081E-R, but probably at 1/2 the cost.
 
  -B
 
  --
  Brandon High [EMAIL PROTECTED]
  The good is the enemy of the best. - Nietzsche
 



Re: [zfs-discuss] Case study/recommended ZFS setup for home file

2008-07-09 Thread Bohdan Tashchuk
Thank you for starting this thread.

Your questions are quite frequently asked in this forum, but I'm very
interested in the topic.  Anyway, the best answer varies from month
to month, so I hope you get some good feedback.

 I cannot use OpenSolaris 2008.05 since it does not
 recognize the SATA disks attached to the southbridge.
 A fix for this problem went into build 93.

Which forum/mailing list discusses SATA issues like the above?

Thanks
 
 


Re: [zfs-discuss] Case study/recommended ZFS setup for home file

2008-07-09 Thread Florin Iucha
On Wed, Jul 09, 2008 at 08:42:37PM -0700, Bohdan Tashchuk wrote:
  I cannot use OpenSolaris 2008.05 since it does not
  recognize the SATA disks attached to the southbridge.
  A fix for this problem went into build 93.
 
 Which forum/mailing list discusses SATA issues like the above?

#opensolaris in freenode.net

I booted from the OpenSolaris LiveCD/installer, and noticing the lack
of available disks, I cried for help on #irc.  There were a few
helpful people that gave me some commands to run, to try and get this
going.  After their efforts failed, I googled for solaris and SB600
(this is the ATI SouthBridge chip) and found a forum posting from
another user, back in February, and the hit in bugzilla, pointing to
the resolution of the bug, with the target being snv_93.

Cheers,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163




Re: [zfs-discuss] Case study/recommended ZFS setup for home file server

2008-07-09 Thread Florin Iucha
On Wed, Jul 09, 2008 at 06:02:24PM -0700, Brandon High wrote:
 On Wed, Jul 9, 2008 at 3:37 PM, Florin Iucha [EMAIL PROTECTED] wrote:
 The reason for using a whole disk is that ZFS will turn on the drive's
 cache. When using slices, the cache is normally disabled. If all
 slices are using ZFS, you can turn the drive cache back on. I don't
 think it happens by default right now, but you can set it manually.

Aha! Good to know.

 Another alternative is to use an IDE to Compact Flash adapter, and
 boot off of flash. I'll be building a media server once we move, and
 that system will boot from flash. You can also boot from USB keys, but
 USB under OpenSolaris seems to be iffy.
 
 Here's the component list that I'm planning to use right now:
 http://secure.newegg.com/WishList/PublicWishDetail.aspx?Source=MSWDWishListNumber=7739092

That adapter won't work for me, since I have a single IDE port, and I
need to use the DVD to install the OS and maybe to run some backups.

However, this looks interesting:

   http://www.addonics.com/products/flash_memory_reader/ad2sahdcf.asp

as it has hardware mirroring.  I'm not sure what the error reporting
through the OS looks like, though - but I hope I don't have to find out.

For the Compact Flash I would spring for the industrial grade:

   
http://www.hitechvendors.com/showproduct.aspx?ProductID=4885&SEName=transcend-4gb-100x-industrial-cf-card-udma4-mode

  My plan right now is to create a 20 GB and a 720 GB slice on each
  disk, then create two storage pools, one RAID-1 (20 GB) and one RAID-5
  (1.440 TB).  Create the root, var, usr and opt file systems in the
  first pool, and home, library and photos in the second.  I hope I
  won't need swap, but I could create three 1 GB slices (one on each
  disk) for that.
 
 I built a Linux-based NAS a few years back using an almost identical
 scheme and wound up regretting it. In the future I would install the
 system on a completely separate disk or group of disks than the shared
 pool.

This is the current Linux-based NAS and I'm not happy with its
performance, either.

  Note: the hardware is committed (i.e. I already have it), so I am not
  inclined to deviate from it 8^)
 
 You might want to look at a 4 or 8 port SATA adapter rather than wait
 for the southbridge fixes.

I like the southbridge since it sits on the PCI Express bus.  The PCI bus
is limited to 133 MB/s which, divided across 3 disks, means 35-40 MB/s
(including overhead) of writes per disk.  And good quality PCI Express
add-on controllers with Solaris drivers are quite expensive.
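
The back-of-envelope division behind those numbers (assuming the classic
32-bit/33 MHz PCI peak of roughly 133 MB/s):

```shell
bus_mb_s=133   # shared PCI bus peak bandwidth
disks=3        # drives writing concurrently
echo "$(( bus_mb_s / disks )) MB/s per disk, before protocol overhead"
# prints: 44 MB/s per disk, before protocol overhead
```

Protocol overhead brings that down to roughly the 35-40 MB/s cited.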

Cheers,
florin

-- 
Bruce Schneier expects the Spanish Inquisition.
  http://geekz.co.uk/schneierfacts/fact/163


pgpWxnTi94UBz.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Adrian Danielson
Thanks everyone for your replies, I have a better understanding of how to test 
out ZFS with MPxIO.

Best Regards,
Adrian
 
 


Re: [zfs-discuss] Case study/recommended ZFS setup for home file

2008-07-09 Thread Richard Elling
Florin Iucha wrote:
 On Wed, Jul 09, 2008 at 08:42:37PM -0700, Bohdan Tashchuk wrote:
   
 I cannot use OpenSolaris 2008.05 since it does not
 recognize the SATA disks attached to the southbridge.
 A fix for this problem went into build 93.
   
 Which forum/mailing list discusses SATA issues like the above?
 

 #opensolaris in freenode.net

 I booted from the OpenSolaris LiveCD/installer, and noticing the lack
 of available disks, I cried for help on #irc.  There were a few
 helpful people that gave me some commands to run, to try and get this
 going.  After their efforts failed, I googled for solaris and SB600
 (this is the ATI SouthBridge chip) and found a forum posting from
 another user, back in February, and the hit in bugzilla, pointing to
 the resolution of the bug, with the target being snv_93.
   

The OpenSolaris live CD has a hardware device detection tool.
Please run it and submit the results (everyone should do this :-)

b93 should be out soon, in the next week or so.
 -- richard



Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread Adrian Danielson
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

Another question:

When trying to mirror the 2nd root disk, I get the following error:

root[tst01:/root]# zpool attach rpool c0t0d0s0 c0t1d0s0
cannot attach c0t1d0s0 to c0t0d0s0: device is too small

Since c0t1d0s0 has been reduced by 1 cylinder it's too small; is there a way
to reduce the existing rpool so it will fit?  I did not see in the
zfsadmin.pdf guide whether there is a way to do this or a workaround.  If I
use -f it will work, but it creates an EFI-labeled disk, which as I
understand it does not boot using ZFS.  I must be overlooking a step.

Thanks again,
Adrian
 
 


Re: [zfs-discuss] Using zfs boot with MPxIO on T2000

2008-07-09 Thread James C. McPherson
Adrian Danielson wrote:
 http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
 
 Another question:
 
 When trying to mirror the 2nd root disk, I get the following error:
 
 root[tst01:/root]# zpool attach rpool c0t0d0s0 c0t1d0s0 cannot attach
 c0t1d0s0 to c0t0d0s0: device is too small
 
 Since c0t1d0s0 has been reduced by 1 cylinder it's too small, is there a
 way to reduce the existing rpool so it will fit?  I did not see in the
 zfsadmin.pdf guide if there was a way to do this or a work around.  If I
 use the -f it will work but create an EFI labeled disk as I understand
 does not boot using ZFS.  I must be overlooking a step.


Sorry, you're out of luck - at least for the moment. Can you
create the rpool to be smaller?
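
One way to see the mismatch before attempting the attach - a sketch that
compares slice-0 sector counts with prtvtoc (device names are the poster's,
the column position is assumed from prtvtoc's VTOC table, and the whole
thing is guarded to a no-op off Solaris):

```shell
if [ "$(uname -s)" != "SunOS" ]; then
  echo "skipping: requires Solaris"
else
  # Sector count of slice 0 on each disk (5th column of prtvtoc's table).
  prtvtoc /dev/rdsk/c0t0d0s0 | awk '$1 == "0" { print "existing: ", $5 }'
  prtvtoc /dev/rdsk/c0t1d0s0 | awk '$1 == "0" { print "candidate:", $5 }'
  # The candidate must be at least as large as the existing slice; once the
  # slices match (or the pool is rebuilt smaller), this should succeed:
  zpool attach rpool c0t0d0s0 c0t1d0s0
fi
```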


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog