Re: [zfs-discuss] Expected throughput

2010-07-07 Thread Orvar Korvar
Something like this, maybe http://blogs.sun.com/constantin/entry/x4500_solaris_zfs_iscsi_perfect

Re: [zfs-discuss] Expected throughput

2010-07-06 Thread James Van Artsdalen
Under FreeBSD I've seen zpool scrub sustain nearly 500 MB/s in pools with large files (a pool with eight mirror vdevs on two Silicon Image 3124 controllers). You need to look carefully for bottlenecks in the hardware. You don't indicate how the disks are attached. I would measure the total bandwidth available through each controller.
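A rough way to look for hardware bottlenecks like this (the pool name "tank" below is only a placeholder) is to drive a scrub and watch per-vdev and per-disk throughput while it runs:

  # start a scrub and check its progress
  zpool scrub tank
  zpool status tank

  # per-vdev and per-disk bandwidth, refreshed every 5 seconds
  zpool iostat -v tank 5

  # per-device service times and %busy on Solaris/Nexenta
  iostat -xn 5

If one controller or a single disk saturates while the others stay idle, the limit is in the hardware path rather than in ZFS.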

Re: [zfs-discuss] Expected throughput

2010-07-05 Thread Richard Elling
On Jul 5, 2010, at 4:19 AM, Ian D wrote: >> Also, are you using jumbo frames? That can usually help a bit with either access protocol. > Yes. It was off early on and we did notice a significant difference once we switched it on. Turning Nagle off as suggested by Richard also seemed to help.
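For reference, the usual knobs here are the MTU on both ends of the 10GbE link and the Nagle setting on the Solaris-derived target; the interface names below are only placeholders for this setup:

  # jumbo frames on the Nexenta/OpenSolaris side (dladm-managed NIC)
  dladm set-linkprop -p mtu=9000 ixgbe0

  # jumbo frames on the Linux initiator
  ip link set dev eth2 mtu 9000

  # disable Nagle globally on the Solaris-derived target
  ndd -set /dev/tcp tcp_naglim_def 1

Both ends (and any switch in between) need the same MTU, otherwise large frames get dropped or fragmented.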

Re: [zfs-discuss] Expected throughput

2010-07-05 Thread Roy Sigurd Karlsbakk
> Just a short question - wouldn't it be easier, and perhaps faster, to just have the MySQL DB on an NFS share? iSCSI adds complexity, both on the target and the initiator. Yes, we did try both and we didn't notice any difference in terms of performance. I've read conflicting opinions on this.

Re: [zfs-discuss] Expected throughput

2010-07-05 Thread Ian D
> Just a short question - wouldn't it be easier, and perhaps faster, to just have the MySQL DB on an NFS share? iSCSI adds complexity, both on the target and the initiator. Yes, we did try both and we didn't notice any difference in terms of performance. I've read conflicting opinions on this.

Re: [zfs-discuss] Expected throughput

2010-07-05 Thread Roy Sigurd Karlsbakk
> The database is MySQL, it runs on a Linux box that connects to the Nexenta server through 10GbE using iSCSI. Just a short question - wouldn't it be easier, and perhaps faster, to just have the MySQL DB on an NFS share? iSCSI adds complexity, both on the target and the initiator. Also, are you using jumbo frames? That can usually help a bit with either access protocol.
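For comparison, serving the database over NFS instead of iSCSI only takes a couple of steps; the dataset, host and mount point names below are placeholders, and the 16k recordsize simply matches InnoDB's default page size:

  # on the Nexenta server: create and export a dataset for the database
  zfs create -o recordsize=16k -o sharenfs=on tank/mysql

  # on the Linux box: mount it and point MySQL's datadir at it
  mount -t nfs -o rw,hard,noatime,rsize=32768,wsize=32768 nexenta:/tank/mysql /var/lib/mysql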

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
> Is that 38% of one CPU or 38% of all CPUs? How many CPUs does the Linux box have? I don't mean the number of sockets, I mean number of sockets * number of cores * number of threads per core. The server has two Intel X5570s, they are quad core and have hyperthreading, so that would be 2 sockets x 4 cores x 2 threads = 16 logical CPUs.
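The logical CPU count is easy to confirm on the Linux side; with two quad-core X5570s and Hyper-Threading enabled it should report 16:

  # number of logical CPUs visible to the scheduler
  nproc
  grep -c ^processor /proc/cpuinfo

  # sockets, cores per socket and threads per core in one view
  lscpu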

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 2:08 PM, Ian D wrote:
> Mem:  74098512k total, 73910728k used,   187784k free,    96948k buffers
> Swap:  2104488k total,      208k used,  2104280k free, 63210472k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> 17652 mysql     20   0 3553m
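Plain top folds all of mysqld's threads into that single line; a per-thread view (using the pid from the output above) makes it easier to see whether one thread is pegged on a single core:

  # Linux: show the individual threads of the mysqld process
  top -H -p 17652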

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Richard Elling
On Jul 4, 2010, at 8:08 AM, Ian D wrote: > Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right?

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
> In what way is CPU contention being monitored? "prstat" without options is nearly useless for a multithreaded app on a multi-CPU (or multi-core/multi-thread) system. mpstat is only useful if threads never migrate between CPUs. "prstat -mL" gives a nice picture of how busy each LWP (thread) really is.
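For reference, the suggested views on the Solaris/Nexenta side look like this; the 5-second interval and the example pid are arbitrary:

  # per-thread (LWP) microstate accounting, updated every 5 seconds
  prstat -mL 5

  # the same, limited to a single process; replace 1234 with the pid of interest
  prstat -mL -p 1234 5

  # per-CPU utilization, to spot a single saturated CPU
  mpstat 5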

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 10:08 AM, Ian D wrote: > What I don't understand is why, when I run a single query I get <100 IOPS and <3MB/sec. The setup can obviously do better, so where is the bottleneck? I don't see any CPU core on any side being maxed out so it can't be it... In what way is CPU contention being monitored?

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 11:28 AM, Bob Friesenhahn wrote: >> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right?

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Bob Friesenhahn
Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right? I am not sure who wrote the above text since the attribution is missing.
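For reference, a 14-pair layout like the one described would be built along these lines; the pool name and cXtYd0 device names are made up, with each mirror taking one disk from each JBOD:

  zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0
  # ... and so on, one "mirror <jbod1-disk> <jbod2-disk>" pair per line, 14 pairs in total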

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
> Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right? > Seems right, as Erik said.

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Roy Sigurd Karlsbakk
> To summarise, putting 28 disks in a single vdev is not something you would do if you want performance. You'll end up with as many IOPS as a single drive can do. Split it up into smaller (<10 disk) vdevs and try again. If you need high performance, put them in a striped mirror (aka RAID1+0).

Re: [zfs-discuss] Expected throughput

2010-07-03 Thread Erik Trimble
On 7/3/2010 2:22 PM, Roy Sigurd Karlsbakk wrote: > To summarise, putting 28 disks in a single vdev is not something you would do if you want performance. You'll end up with as many IOPS as a single drive can do.

Re: [zfs-discuss] Expected throughput

2010-07-03 Thread Roy Sigurd Karlsbakk
> To summarise, putting 28 disks in a single vdev is not something you would do if you want performance. You'll end up with as many IOPS as a single drive can do. Split it up into smaller (<10 disk) vdevs and try again. If you need high performance, put them in a striped mirror (aka RAID1+0).
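If raidz2 is still wanted for capacity, the same 28 drives could instead be split into four 7-disk raidz2 vdevs, which raises random IOPS roughly in proportion to the number of vdevs; device names here are placeholders:

  zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
    raidz2 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0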

Re: [zfs-discuss] Expected throughput

2010-07-03 Thread Ian D
> To summarise, putting 28 disks in a single vdev is not something you would do if you want performance. You'll end up with as many IOPS as a single drive can do. Split it up into smaller (<10 disk) vdevs and try again. If you need high performance, put them in a striped mirror (aka RAID1+0).

Re: [zfs-discuss] Expected throughput

2010-07-01 Thread Roy Sigurd Karlsbakk
> Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we get about 80MB/s in sequential read or write. We're running local tests on the server itself (no network involved). Is that what we should be expecting? It seems slow to me. Please read the ZFS best practices guide.

[zfs-discuss] Expected throughput

2010-07-01 Thread Ian D
Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we get about 80MB/s in sequential read or write. We're running local tests on the server itself (no network involved). Is that what we should be expecting? It seems slow to me. Thanks
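For what it's worth, a local sequential baseline can be taken with something as simple as dd; the path and size below are arbitrary, compression should be off on the dataset so the zeros are really written, and the file should be larger than RAM so the read pass is not served from the ARC:

  # sequential write, 1 MB blocks, 128 GB total
  dd if=/dev/zero of=/tank/testfile bs=1M count=131072

  # sequential read back
  dd if=/tank/testfile of=/dev/null bs=1M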