Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread James C. McPherson
On 27/02/11 05:24 PM, Dave Pooser wrote: On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote: On your system, c12 is the mpxio virtual controller; any disk which is potentially multipath-able (and that includes the SAS drives) will appear as a child of the virtual controller

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of David Blasingame Oracle Keep pool space under 80% utilization to maintain pool performance. For what it's worth, the same is true for any other filesystem too. What really matters is the

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brandon High I would avoid USB, since it can be less reliable than other connection methods. That's the impression I get from older posts made by Sun Take that a step further. Anything

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread Dave Pooser
On 2/27/11 5:15 AM, James C. McPherson j...@opensolaris.org wrote: On 27/02/11 05:24 PM, Dave Pooser wrote: On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote: On your system, c12 is the mpxio virtual controller; any disk which is potentially multipath-able (and that includes the

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread Roy Sigurd Karlsbakk
I can live with that-- but I really want to know what (real, not virtual) controllers disks are connected to; I want to build 3 8-disk RAIDz2 vdevs now (with room for a fourth for expansion later) and I really want to make sure each of those vdevs has fewer than three disks per controller so
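Roy's constraint (fewer than three disks from any one controller in each raidz2 vdev, so losing a whole controller never exceeds a vdev's two-disk redundancy) can be sketched as a round-robin assignment. This is an illustrative sketch only: the controller and disk names are invented, and it assumes four controllers with six disks each, which is one layout that makes the constraint satisfiable for three 8-wide vdevs.

```python
from collections import defaultdict

def assign_vdevs(disks_by_controller, vdev_count, vdev_size, max_per_controller):
    """Round-robin each controller's disks across the vdevs, skipping
    any vdev that is already full or at the per-controller cap."""
    vdevs = [[] for _ in range(vdev_count)]
    per_ctrl = [defaultdict(int) for _ in range(vdev_count)]
    i = 0
    for ctrl, disks in disks_by_controller.items():
        for disk in disks:
            for _ in range(vdev_count):
                v = i % vdev_count
                i += 1
                if len(vdevs[v]) < vdev_size and per_ctrl[v][ctrl] < max_per_controller:
                    vdevs[v].append(disk)
                    per_ctrl[v][ctrl] += 1
                    break
    return vdevs

# Four controllers with six disks each (hypothetical names):
layout = {f"c{c}": [f"c{c}t{t}d0" for t in range(6)] for c in (10, 11, 12, 13)}
vdevs = assign_vdevs(layout, vdev_count=3, vdev_size=8, max_per_controller=2)
# Each vdev ends up with 8 disks, at most 2 from any one controller.
```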

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Roy Sigurd Karlsbakk
In reading the ZFS Best practices, I'm curious if this statement is still true about 80% utilization. It is, and in my experience, it doesn't matter much if you have a full pool and add another VDEV, the existing VDEVs will be full still, and performance will be slow. For this reason, new
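The 80% rule is easy to check mechanically. A minimal sketch, assuming the tab-separated output format of `zpool list -H -o name,cap`; the sample output string is invented for illustration, not from a real system.

```python
# Flag pools above the 80% capacity threshold from the Best Practices
# guide. `sample` stands in for captured `zpool list -H -o name,cap`
# output (tab-separated, one pool per line).

sample = "tank\t85%\nbackup\t42%\n"

def pools_over_threshold(zpool_list_output, threshold=80):
    over = []
    for line in zpool_list_output.strip().splitlines():
        name, cap = line.split("\t")
        if int(cap.rstrip("%")) > threshold:
            over.append(name)
    return over

print(pools_over_threshold(sample))  # ['tank']
```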

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread Dave Pooser
On 2/27/11 11:18 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: I cannot but agree. On Linux and Windoze (haven't tested FreeBSD), drives connected to an LSI9211 show up in the correct order, but not on OI/osol/S11ex (IIRC), and fmtopo doesn't always show a mapping between device name and

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread James C. McPherson
On 28/02/11 07:30 AM, Dave Pooser wrote: On 2/27/11 11:18 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: I cannot but agree. On Linux and Windoze (haven't tested FreeBSD), drives connected to an LSI9211 show up in the correct order, but not on OI/osol/S11ex (IIRC), and fmtopo doesn't

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread James C. McPherson
On 28/02/11 03:18 AM, Roy Sigurd Karlsbakk wrote: I can live with that-- but I really want to know what (real, not virtual) controllers disks are connected to; I want to build 3 8-disk RAIDz2 vdevs now (with room for a fourth for expansion later) and I really want to make sure each of those

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread James C. McPherson
On 28/02/11 02:08 AM, Dave Pooser wrote: On 2/27/11 5:15 AM, James C. McPherson j...@opensolaris.org wrote: On 27/02/11 05:24 PM, Dave Pooser wrote: On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote: On your system, c12 is the mpxio virtual controller; any disk which is

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Rich Teer
On Fri, 25 Feb 2011, Brandon High wrote: You might want to consider eSATA. Port multipliers are supported in recent builds (128+ I think), and will give better performance than USB. I'm not sure if PMP are supported on Sparc though, since it requires support in both the controller and PMP.

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread Dave Pooser
On 2/27/11 4:07 PM, James C. McPherson j...@opensolaris.org wrote: I misread your initial email, sorry. No worries-- I probably could have written it more clearly. So your disks are connected to separate PHYs on the HBA, by virtue of their cabling. You can see this for yourself by looking at

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Brandon High
On Sun, Feb 27, 2011 at 4:15 PM, Rich Teer rich.t...@rite-group.com wrote: So the question is, what eSATA non-RAID HBA do people recommend?  Bear in mind that I'm looking for something with driver support out of the box with either the latest Solaris 10, or Solaris 11 Express. The SiI3124 (PCI

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread taemun
On 28 February 2011 02:06, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: Take that a step further. Anything external is unreliable. I have used USB, eSATA, and Firewire external devices. They all work. The only question is for how long. eSATA has no need

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Jerry Kemp
I can tell you specifically that the 3124 will not work in Sparc equipment. I specifically purchased a 3124 after seeing glowing reviews in the archives. I needed it for a low end project using a V120 or Netra T1. What I didn't pick up from reviewing the archives was all of the glowing reviews

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Brandon High
On Sun, Feb 27, 2011 at 7:48 AM, taemun tae...@gmail.com wrote: eSATA has no need for any interposer chips between a modern SATA chipset on the motherboard and a SATA hard drive. You can buy cables with appropriate eSATA has different electrical specifications, namely higher minimum transmit

Re: [zfs-discuss] External SATA drive enclosures + ZFS?

2011-02-27 Thread Krunal Desai
On Feb 27, 2011, at 10:48 , taemun wrote: eSATA has no need for any interposer chips between a modern SATA chipset on the motherboard and a SATA hard drive. You can buy cables with appropriate ends for this. There is no reason why the data side of an eSATA drive should be any more likely

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Brandon High
On Sun, Feb 27, 2011 at 6:59 AM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: But there is one specific thing, isn't there?  Where ZFS will choose to use a different algorithm for something, when pool usage exceeds some threshold. Right?  What is that? It moves
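The behaviour Brandon is describing is ZFS changing allocation policy as the pool fills: roughly, first-fit while space is plentiful, best-fit once usage crosses a high threshold. The sketch below is a toy model of that idea, not ZFS internals; the 96% figure and the free-region representation are assumptions for illustration.

```python
# Toy model of a fullness-dependent allocation-policy switch: below
# the threshold, take the first free region big enough (fast); above
# it, search for the tightest fit (slower, but wastes less space).

def allocate(free_regions, size, pool_used_pct, threshold=96):
    """free_regions is a list of (offset, length) tuples."""
    candidates = [r for r in free_regions if r[1] >= size]
    if not candidates:
        return None
    if pool_used_pct < threshold:
        return candidates[0]                    # first fit: cheap scan
    return min(candidates, key=lambda r: r[1])  # best fit: least leftover

regions = [(0, 64), (100, 16), (200, 32)]
assert allocate(regions, 16, pool_used_pct=50) == (0, 64)    # first fit
assert allocate(regions, 16, pool_used_pct=97) == (100, 16)  # best fit
```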

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread James C. McPherson
On 28/02/11 12:46 PM, Dave Pooser wrote: On 2/27/11 4:07 PM, James C. McPherson j...@opensolaris.org wrote: ...

PHY  iport@
0    1
1    2
2    4
3    8
4    10
5    20
6    40
7    80

OK, bear with me for a moment because I'm feeling extra dense this evening. The PHY tells me which port on
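The PHY-to-iport numbers James lists follow a simple rule: the iport@ address is a hex bitmask with one bit per PHY, so PHY n maps to iport@(1 << n) in hex. A small sketch of that rule, covering narrow (single-PHY) ports only; wide ports OR several bits together and are not modelled here.

```python
# Map a PHY number to its iport@ address: one bit per PHY, printed
# in lowercase hex (so PHY 4 -> iport@10, PHY 7 -> iport@80).

def iport_for_phy(phy):
    return format(1 << phy, "x")

mapping = {phy: iport_for_phy(phy) for phy in range(8)}
# {0: '1', 1: '2', 2: '4', 3: '8', 4: '10', 5: '20', 6: '40', 7: '80'}
```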

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread Dave Pooser
On 2/27/11 10:06 PM, James C. McPherson j...@opensolaris.org wrote: I've arranged these by devinfo path:

1st controller
c10t2d0 /pci@0,0/pci8086,340a@3/pci1000,72@0/iport@4/disk@p2,0
c15t5000CCA222E006B6d0 /pci@0,0/pci8086,340a@3/pci1000,72@0/iport@8/disk@w5000cca222e006b6,0
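Given devinfo paths like these, the iport@ component can be pulled out and turned back into a PHY number. A hedged helper sketch: `phy_from_path` is a hypothetical name, not a system utility, and it assumes a narrow port (a single bit set in the iport mask).

```python
import re

def phy_from_path(devpath):
    """Extract the iport@ hex mask from a /devices-style path and
    return the PHY number it encodes, or None for wide/unknown ports."""
    m = re.search(r"/iport@([0-9a-f]+)/", devpath)
    if not m:
        return None
    mask = int(m.group(1), 16)
    if mask == 0 or mask & (mask - 1):
        return None  # empty or multi-bit mask: not a single PHY
    return mask.bit_length() - 1

path = "/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@8/disk@w5000cca222e006b6,0"
print(phy_from_path(path))  # 3
```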

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread James C. McPherson
On 28/02/11 02:51 PM, Dave Pooser wrote: On 2/27/11 10:06 PM, James C. McPherson j...@opensolaris.org wrote: ...

2nd controller
c16t5000CCA222DDD7BAd0 /pci@0,0/pci8086,340c@5/pci1000,3020@0/iport@2/disk@w5000cca222ddd7ba,0
3rd controller
c14t5000CCA222DF8FBEd0

Re: [zfs-discuss] Format returning bogus controller info

2011-02-27 Thread Richard Elling
On Feb 27, 2011, at 9:18 AM, Roy Sigurd Karlsbakk wrote: I can live with that-- but I really want to know what (real, not virtual) controllers disks are connected to; I want to build 3 8-disk RAIDz2 vdevs now (with room for a fourth for expansion later) and I really want to make sure each of

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Toby Thain
On 27/02/11 9:59 AM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of David Blasingame Oracle Keep pool space under 80% utilization to maintain pool performance. For what it's worth, the same is true for any other

Re: [zfs-discuss] ZFS Performance

2011-02-27 Thread Eric D. Mudama
On Mon, Feb 28 at 0:30, Toby Thain wrote: I would expect COW puts more pressure on near-full behaviour compared to write-in-place filesystems. If that's not true, somebody correct me. Off the top of my head, I think it'd depend on the workload. Write-in-place will always be faster with large