On 27/02/11 05:24 PM, Dave Pooser wrote:
On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On your system, c12 is the mpxio virtual controller; any disk which is
potentially multipath-able (and that includes the SAS drives) will
appear as a child of the virtual controller
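For anyone wanting to check that mapping themselves, the stmsboot(1M) utility will list how the multipathed (scsi_vhci) names correspond to the underlying per-controller names; a quick look, assuming a stock Solaris/OpenSolaris install:

    # List non-STMS to STMS (mpxio) device name mappings.
    stmsboot -L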
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
Keep pool space under 80% utilization to maintain pool performance.
For what it's worth, the same is true for any other filesystem too. What
really matters is the
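A minimal sketch of how one might watch for that threshold, assuming the standard zpool(1M) CLI and its percentage-valued 'capacity' property:

    #!/bin/sh
    # Warn when any imported pool crosses 80% capacity.
    # -H gives headerless, tab-separated output for scripting.
    THRESHOLD=80
    zpool list -H -o name,capacity | while read name cap; do
        pct=${cap%\%}                 # strip the trailing '%'
        if [ "$pct" -ge "$THRESHOLD" ]; then
            echo "WARNING: pool $name is ${cap} full"
        fi
    done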
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Brandon High
I would avoid USB, since it can be less reliable than other connection
methods. That's the impression I get from older posts made by Sun
Take that a step further. Anything
On 2/27/11 5:15 AM, James C. McPherson j...@opensolaris.org wrote:
On 27/02/11 05:24 PM, Dave Pooser wrote:
On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On your system, c12 is the mpxio virtual controller; any disk which is
potentially multipath-able (and that includes the
I can live with that-- but I really want to know what (real, not virtual)
controllers disks are connected to; I want to build 3 8-disk RAIDz2 vdevs
now (with room for a fourth for expansion later) and I really want to make
sure each of those vdevs has fewer than three disks per controller so
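For illustration only, a layout along those lines might look like the sketch below, with every cXtYdZ name a hypothetical placeholder and at most two disks per controller in each vdev (so a controller failure costs any raidz2 vdev no more than its two-disk parity can absorb):

    # Hypothetical sketch: three 8-disk raidz2 vdevs spread across
    # four controllers (c1-c4 are placeholder controller numbers).
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 c3t1d0 c4t0d0 c4t1d0 \
      raidz2 c1t2d0 c1t3d0 c2t2d0 c2t3d0 c3t2d0 c3t3d0 c4t2d0 c4t3d0 \
      raidz2 c1t4d0 c1t5d0 c2t4d0 c2t5d0 c3t4d0 c3t5d0 c4t4d0 c4t5d0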
In reading the ZFS Best practices, I'm curious if this statement is
still true about 80% utilization.
It is, and in my experience it doesn't help much to add another VDEV to a
full pool: the existing VDEVs will still be full, and performance will still
be slow. For this reason, new
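One way to see that imbalance directly, assuming a pool named 'tank':

    # Per-vdev allocated/free space: vdevs added late show almost all
    # of their space free while the original vdevs stay nearly full.
    zpool iostat -v tank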
On 2/27/11 11:18 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
I cannot but agree. On Linux and Windoze (haven't tested FreeBSD), drives
connected to an LSI9211 show up in the correct order, but not on
OI/osol/S11ex (IIRC), and fmtopo doesn't always show a mapping between
device name and
On 28/02/11 07:30 AM, Dave Pooser wrote:
On 2/27/11 11:18 AM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
I cannot but agree. On Linux and Windoze (haven't tested FreeBSD), drives
connected to an LSI9211 show up in the correct order, but not on
OI/osol/S11ex (IIRC), and fmtopo doesn't
On 28/02/11 03:18 AM, Roy Sigurd Karlsbakk wrote:
I can live with that-- but I really want to know what (real, not
virtual) controllers disks are connected to; I want to build 3 8-disk
RAIDz2 vdevs now (with room for a fourth for expansion later) and I
really want to make sure each of those
On 28/02/11 02:08 AM, Dave Pooser wrote:
On 2/27/11 5:15 AM, James C. McPherson j...@opensolaris.org wrote:
On 27/02/11 05:24 PM, Dave Pooser wrote:
On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On your system, c12 is the mpxio virtual controller; any disk which is
On Fri, 25 Feb 2011, Brandon High wrote:
You might want to consider eSATA. Port multipliers are supported in
recent builds (128+ I think), and will give better performance than
USB. I'm not sure if PMPs are supported on Sparc though, since it
requires support in both the controller and the PMP.
On 2/27/11 4:07 PM, James C. McPherson j...@opensolaris.org wrote:
I misread your initial email, sorry.
No worries-- I probably could have written it more clearly.
So your disks are connected to separate PHYs on the HBA, by virtue
of their cabling. You can see this for yourself by looking at
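One way to do that inspection is to follow a /dev/rdsk symlink back into the /devices tree; the iport@ component of the target identifies the port (the device name and path here are taken from later in this thread):

    # /dev/rdsk entries are symlinks into /devices; the iport@ part
    # of the target shows which HBA port the disk sits behind.
    ls -l /dev/rdsk/c10t2d0s0
    # -> ../../devices/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@4/disk@p2,0:a,raw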
On Sun, Feb 27, 2011 at 4:15 PM, Rich Teer rich.t...@rite-group.com wrote:
So the question is, what eSATA non-RAID HBA do people recommend? Bear
in mind that I'm looking for something with driver support out of the
box with either the latest Solaris 10, or Solaris 11 Express.
The SiI3124 (PCI
On 28 February 2011 02:06, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
Take that a step further. Anything external is unreliable. I have used
USB, eSATA, and Firewire external devices. They all work. The only
question is for how long.
eSATA has no need
I can tell you specifically that the 3124 will not work in Sparc
equipment. I specifically purchased a 3124 after seeing glowing reviews
in the archives. I needed it for a low end project using a V120 or
Netra T1. What I didn't pick up from reviewing the archives was all of
the glowing reviews
On Sun, Feb 27, 2011 at 7:48 AM, taemun tae...@gmail.com wrote:
eSATA has no need for any interposer chips between a modern SATA chipset on
the motherboard and a SATA hard drive. You can buy cables with appropriate
eSATA has different electrical specifications, namely higher minimum
transmit
On Feb 27, 2011, at 10:48 , taemun wrote:
eSATA has no need for any interposer chips between a modern SATA chipset on
the motherboard and a SATA hard drive. You can buy cables with appropriate
ends for this. There is no reason why the data side of an eSATA drive should
be any more likely
On Sun, Feb 27, 2011 at 6:59 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
But there is one specific thing, isn't there? Where ZFS will choose to use
a different algorithm for something, when pool usage exceeds some threshold.
Right? What is that?
It moves
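For context: the behaviour being alluded to is the metaslab allocator switching from first-fit to best-fit allocation once a metaslab's free space falls below a threshold, governed on Solaris-era ZFS by the metaslab_df_free_pct tunable (default 4, i.e. around 96% full). One hedged way to inspect the live value, assuming root and mdb(1) kernel access:

    # Print the percent-free cutover at which the allocator switches
    # from first-fit to best-fit (name and default assume this-era ZFS).
    echo "metaslab_df_free_pct/D" | mdb -k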
On 28/02/11 12:46 PM, Dave Pooser wrote:
On 2/27/11 4:07 PM, James C. McPherson j...@opensolaris.org wrote:
...
PHY   iport@
 0    1
 1    2
 2    4
 3    8
 4    10
 5    20
 6    40
 7    80
OK, bear with me for a moment because I'm feeling extra dense this evening.
The PHY tells me which port on
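For what it's worth, the pattern in that table is that the iport@ address is just a hex bitmask with the PHY's bit set, i.e. 1 << PHY; a throwaway loop makes the correspondence obvious:

    # iport@ names are hex bitmasks of the PHYs behind the port:
    # PHY 0 -> iport@1, PHY 1 -> iport@2, ... PHY 7 -> iport@80.
    for phy in 0 1 2 3 4 5 6 7; do
        printf 'PHY %d -> iport@%x\n' "$phy" "$((1 << phy))"
    done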
On 2/27/11 10:06 PM, James C. McPherson j...@opensolaris.org wrote:
I've arranged these by devinfo path:
1st controller
c10t2d0
/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@4/disk@p2,0
c15t5000CCA222E006B6d0
/pci@0,0/pci8086,340a@3/pci1000,72@0/iport@8/disk@w5000cca222e006b6,0
On 28/02/11 02:51 PM, Dave Pooser wrote:
On 2/27/11 10:06 PM, James C. McPherson j...@opensolaris.org wrote:
...
2nd controller
c16t5000CCA222DDD7BAd0
/pci@0,0/pci8086,340c@5/pci1000,3020@0/iport@2/disk@w5000cca222ddd7ba,0
3rd controller
c14t5000CCA222DF8FBEd0
On Feb 27, 2011, at 9:18 AM, Roy Sigurd Karlsbakk wrote:
I can live with that-- but I really want to know what (real, not virtual)
controllers disks are connected to; I want to build 3 8-disk RAIDz2 vdevs
now (with room for a fourth for expansion later) and I really want to make
sure each of
On 27/02/11 9:59 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
Keep pool space under 80% utilization to maintain pool performance.
For what it's worth, the same is true for any other
On Mon, Feb 28 at 0:30, Toby Thain wrote:
I would expect COW puts more pressure on near-full behaviour compared to
write-in-place filesystems. If that's not true, somebody correct me.
Off the top of my head, I think it'd depend on the workload.
Write-in-place will always be faster with large