Re: [zfs-discuss] Format returning bogus controller info

2011-02-28 Thread Dave Pooser
On 2/27/11 11:13 PM, James C. McPherson j...@opensolaris.org wrote:

/pci@0,0/pci8086,340c@5/pci1000,3020@0
and
/pci@0,0/pci8086,340e@7/pci1000,3020@0

which are in different slots on your motherboard and connected to
different PCI Express Root Ports - which should help with transfer
rates amongst other things. Have a look at /usr/share/hwdata/pci.ids
for 340[0-9a-f] after the line which starts with 8086.
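
For anyone searching the archives later: something like the following should pull
those entries out of pci.ids (just a sketch, assuming the usual pci.ids layout --
vendor IDs at column 0, device IDs indented beneath them):

  # list the Intel (8086) 340x device entries mentioned above
  awk '/^8086/     { in_intel = 1; next }
       /^[0-9a-f]/ { in_intel = 0 }
       in_intel && $1 ~ /^340[0-9a-f]$/' /usr/share/hwdata/pci.ids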

That's the information I needed; I now have the drives allocated across
multiple controllers for the fault-tolerance I was looking for.
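
Roughly speaking, the idea is to pair each disk on one HBA with a disk on the
other; the device names below are hypothetical, just to illustrate:

  # each mirror spans both controllers, so losing one HBA still leaves
  # every mirror with a surviving half (hypothetical device names)
  zpool create tank \
      mirror c12t0d0 c13t0d0 \
      mirror c12t1d0 c13t1d0 \
      mirror c12t2d0 c13t2d0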

Thanks for all your help-- not only can I fully, unequivocally retract my
failed bit crack, but I just ordered two more of these cards for my next
project!  :^)
--
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com




Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Torrey McMahon

On 2/25/2011 4:15 PM, Torrey McMahon wrote:

On 2/25/2011 3:49 PM, Tomas Ögren wrote:

On 25 February, 2011 - David Blasingame Oracle sent me these 2,6K bytes:


  Hi All,

  In reading the ZFS Best Practices guide, I'm curious whether the statement
  about 80% utilization is still true.

It happens at about 90% for me: all of a sudden the mail server got
butt slow. Killed an old snapshot to get back down to 85% or so used, then it
got snappy again. S10u9 SPARC.
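
For reference, the fill level this rule of thumb refers to is the pool-wide
capacity, which plain zpool list reports in the CAP column:

  # the CAP column is the percentage the 80%/90% guideline is about
  zpool list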


Some of the recent updates have pushed the 80% watermark closer to 90% 
for most workloads.


Sorry folks. I was thinking of yet another change that was made to the 
allocation algorithms. 80% is the number to stick with.


... now where did I put my cold medicine? :)


Re: [zfs-discuss] Format returning bogus controller info

2011-02-28 Thread Roy Sigurd Karlsbakk
  I cannot but agree. On Linux and Windoze (haven't tested FreeBSD),
  drives connected to an LSI9211 show up in the correct order, but not
  on OI/osol/S11ex (IIRC), and fmtopo doesn't always show a mapping
  between device name and slot, since that relies on the SES hardware
  being properly supported. The answer I've got for this issue is
  "it's not an issue, since it's that way by design", etc. This doesn't
  make sense when Linux/Windows show the drives in the correct order.
  IMHO this looks more like a design flaw in the driver code.
 
 lsiutil can help. The question is whether the physical labels or
 silkscreen match
 the slot as reported by lsiutil.

Last I checked, it didn't help much. IMHO we need a driver that can display the 
drives in the order they're plugged in. Like Windoze. Like Linux. Like FreeBSD. 
I really don't understand why it should be so hard to do it like the others. As 
someone said, "I don't have their sources" -- but both Linux and FreeBSD are OSS 
software, so the source should be readily available.
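
For reference, the kind of mapping I'm after is what a full fmtopo dump gives
you on hardware where the SES side is properly supported -- a sketch, assuming
fmtopo in its usual FMA location:

  # bay/disk nodes in the topology tie a physical slot to a device, but only
  # when the enclosure's SES processor is actually supported
  /usr/lib/fm/fmd/fmtopo | egrep -i 'bay|disk'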

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases adequate and relevant synonyms exist in Norwegian.


Re: [zfs-discuss] ZFS Performance

2011-02-28 Thread Brandon High
On Sun, Feb 27, 2011 at 7:35 PM, Brandon High bh...@freaks.com wrote:
 It moves from best fit to any fit at a certain point, which is at
 ~ 95% (I think). Best fit looks for a large contiguous space to avoid
 fragmentation while any fit looks for any free space.

I got the terminology wrong: it's first-fit when there is space,
moving to best-fit at 96% full.

See 
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/metaslab.c
for details.
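
If you want to see where your own build sits, the crossover is governed by a
tunable in that file -- assuming it is still called metaslab_df_free_pct, as in
the onnv source above, something like this reads it on a live kernel:

  # 4 here means best-fit kicks in below 4% free, i.e. at 96% full
  echo 'metaslab_df_free_pct/D' | mdb -k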

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Format returning bogus controller info

2011-02-28 Thread James C. McPherson

On  1/03/11 03:00 AM, Dave Pooser wrote:

On 2/27/11 11:13 PM, James C. McPherson j...@opensolaris.org wrote:


/pci@0,0/pci8086,340c@5/pci1000,3020@0
and
/pci@0,0/pci8086,340e@7/pci1000,3020@0

which are in different slots on your motherboard and connected to
different PCI Express Root Ports - which should help with transfer
rates amongst other things. Have a look at /usr/share/hwdata/pci.ids
for 340[0-9a-f] after the line which starts with 8086.


That's the information I needed; I now have the drives allocated across
multiple controllers for the fault-tolerance I was looking for.

Thanks for all your help-- not only can I fully, unequivocally retract my
failed bit crack, but I just ordered two more of these cards for my next
project!  :^)


I'm glad I could help.


cheers,
James
--
Oracle
http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] Format returning bogus controller info

2011-02-28 Thread James C. McPherson

On  1/03/11 07:02 AM, Roy Sigurd Karlsbakk wrote:
...

 Last I checked, it didn't help much. IMHO we need a driver that can
 display the drives in the order they're plugged in. Like Windoze.
 Like Linux. Like FreeBSD. I really don't understand why it should be
 so hard to do it like the others. As someone said, "I don't have their
 sources" -- but both Linux and FreeBSD are OSS software, so the source
 should be readily available.

The difference is in the license. While I could look at what a
BSD-licensed driver does, I do not want to look at a GPL-licensed
driver and create problems for myself or my employer.


James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] Format returning bogus controller info

2011-02-28 Thread Dave Pooser
On 2/28/11 4:23 PM, Garrett D'Amore garr...@nexenta.com wrote:

Drives are ordered in the order they are *enumerated* when they *first*
show up in the system.  *Ever*.

Is the same true of controllers? That is, will c12 remain c12 or
/pci@0,0/pci8086,340c@5 remain /pci@0,0/pci8086,340c@5 even if other
controllers are active?
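
For reference, here's the sort of persistent binding I'm asking about (assuming
these HBAs bind to mpt_sas; the instance number shown is purely illustrative):

  # /etc/path_to_inst records physical-path -> driver-instance bindings,
  # which is what keeps an HBA's instance stable across reboots
  grep mpt_sas /etc/path_to_inst
  # "/pci@0,0/pci8086,340c@5/pci1000,3020@0" 0 "mpt_sas"   (illustrative)
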
-- 
Dave Pooser, ACSA
Manager of Information Services
Alford Media  http://www.alfordmedia.com






[zfs-discuss] ZFS send/recv horribly slow on system with 1800+ filesystems

2011-02-28 Thread Moazam Raja
Hi all, I have a test system with a large number of filesystems which
we take snapshots of and do send/recvs with.

On our test machine, we have 1800+ filesystems and about 5,000
snapshots. The system has 48GB of RAM and 8 cores (x86). The pool
consists of two regular 1TB disks in a mirror, with a 320GB FusionIO
flash card acting as a ZIL and read cache.
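
Roughly, the layout corresponds to something like this (device names
hypothetical, with the FusionIO card split into a log slice and a cache slice):

  # two-disk mirror plus separate log (ZIL) and cache (L2ARC) devices
  zpool create chunky \
      mirror c0t0d0 c0t1d0 \
      log c2t0d0s0 \
      cache c2t0d0s1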

We've noticed that on systems with just a handful of filesystems, ZFS
send (recursive) is quite quick, but on our 1800+ fs box, it's
horribly slow.

For example,

root@testbox:~# zfs send -R chunky/0@async-2011-02-28-15:11:20 | pv -i 1 > /dev/null

2.51GB 0:04:57 [47.4kB/s] [= ]
^C

The other odd thing I've noticed is that during the 'zfs send' to
/dev/null, zpool iostat shows we're actually *writing* to the zpool at
the rate of 4MB-8MB/s, but reading almost nothing. How can this be the
case?
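
One thing that may help narrow down where those writes land is the per-vdev
breakdown while the send runs (pool name as above):

  # shows whether the write traffic hits the mirror disks or the
  # FusionIO log/cache device, refreshing every 5 seconds
  zpool iostat -v chunky 5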

So I'm left with 2 questions -

1.) Does ZFS get immensely slow once we have thousands of filesystems?

2.) Why do we see 4MB-8MB/s of *writes* to the filesystem when we do a
'zfs send' to /dev/null ?


-Moazam