Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Roy Sigurd Karlsbakk
To summarise, putting 28 disks in a single vdev is not something you would do if you want performance. You'll end up with only as many IOPS as a single drive can deliver. Split it up into smaller (10-disk) vdevs and try again. If you need high performance, put them in a striped mirror (aka RAID1+0) A
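
For illustration, a striped-mirror (RAID1+0) layout is built by listing several two-disk "mirror" groups in one zpool create; the pool and device names below are placeholders, not the poster's actual configuration:

  # Each "mirror" keyword starts a new two-disk vdev; reads and writes
  # are then striped across all of the mirror pairs.
  zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0
  # ...continue with one pair per pair of disks, e.g. 14 pairs for 28 disks.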

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Fajar A. Nugraha
On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore garr...@nexenta.com wrote: I am sorry you feel that way.  I will look at your issue as soon as I am able, but I should say that it is almost certain that whatever the problem is, it probably is inherited from OpenSolaris and the build of NCP

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-04 Thread Roy Sigurd Karlsbakk
- Original Message - Victor, The zpool import succeeded on the next attempt following the crash that I reported to you by private e-mail! For completeness, this is the final status of the pool: pool: tank state: ONLINE scan: resilvered 1.50K in 165h28m with 0 errors on Sat
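
For reference, the commands behind the status quoted above (pool name "tank" as in the thread) are the standard ones; a sketch, not the poster's exact session:

  # List pools available for import, then import and inspect one.
  zpool import
  zpool import tank
  zpool status -v tank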

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Garrett D'Amore
Compared to b134? Yes! We have fixed many bugs that still exist in 134. Fajar A. Nugraha fa...@fajar.net wrote: On Sun, Jul 4, 2010 at 12:22 AM, Garrett D'Amore garr...@nexenta.com wrote: I am sorry you feel that way.  I will look at your issue as soon as I am able, but I should say that it

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Roy Sigurd Karlsbakk
- Original Message - Compared to b134? Yes! We have fixed many bugs that still exist in 134. Where can I find a list of these? Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 r...@karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right? Seems right, as Erik said. Btw, do you use
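
The IOPS and bandwidth figures quoted here come from zpool iostat; a typical invocation (pool name "tank" assumed) looks like:

  # Per-vdev operations and bandwidth, refreshed every 5 seconds.
  zpool iostat -v tank 5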

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-04 Thread Andrew Jones
- Original Message - Victor, The zpool import succeeded on the next attempt following the crash that I reported to you by private e-mail! For completeness, this is the final status of the pool: pool: tank state: ONLINE scan: resilvered 1.50K in 165h28m with 0

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Bob Friesenhahn
Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right? I am not sure who wrote the above text

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 11:28 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs.  Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 10:08 AM, Ian D rewar...@hotmail.com wrote: What I don't understand is why, when I run a single query I get 100 IOPS and 3MB/sec.  The setup can obviously do better, so where is the bottleneck?  I don't see any CPU core on any side being maxed out so it can't be it...
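
One way to narrow down such a bottleneck on the Solaris side is per-device latency from iostat; the flags below are standard, but the thresholds that matter depend on the workload:

  # Extended per-device statistics every 5 seconds: low %b and low asvc_t
  # while the query runs suggests the disks are not the limiting factor.
  iostat -xn 5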

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
In what way is CPU contention being monitored? prstat without options is nearly useless for a multithreaded app on a multi-CPU (or multi-core/multi-thread) system. mpstat is only useful if threads never migrate between CPUs. prstat -mL gives a nice picture of how busy each LWP (thread)
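
The tools mentioned above, roughly as one would run them on the Solaris/Nexenta side:

  # Per-thread (LWP) microstate accounting, refreshed every 5 seconds.
  prstat -mL 5
  # Per-CPU utilisation; as noted, misleading if threads migrate between CPUs.
  mpstat 5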

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Richard Elling
On Jul 4, 2010, at 8:08 AM, Ian D wrote: Ok... so we've rebuilt the pool as 14 pairs of mirrors, each pair having one disk in each of the two JBODs. Now we're getting about 500-1000 IOPS (according to zpool iostat) and 20-30MB/sec in random read on a big database. Does that sound right?

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Mike Gerdts
On Sun, Jul 4, 2010 at 2:08 PM, Ian D rewar...@hotmail.com wrote:
Mem:  74098512k total, 73910728k used,   187784k free,    96948k buffers
Swap:  2104488k total,      208k used,  2104280k free, 63210472k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
17652 mysql
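
On the Linux side, a per-thread view of the mysqld process quoted above (PID 17652 from that output) can be had with top's thread mode; this is just one way to look at it:

  # -H shows individual threads instead of one aggregated process line.
  top -H -p 17652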

Re: [zfs-discuss] Expected throughput

2010-07-04 Thread Ian D
Is that 38% of one CPU or 38% of all CPUs? How many CPUs does the Linux box have? I don't mean the number of sockets, I mean number of sockets * number of cores * number of threads per core. My The server has two Intel X5570s; they are quad-core and have hyperthreading. It would say
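
For the box described, the logical CPU count works out to 2 sockets x 4 cores x 2 hardware threads = 16, which Linux reports directly; either of these standard commands shows it:

  # Count logical CPUs as the kernel sees them.
  nproc
  grep -c ^processor /proc/cpuinfo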

Re: [zfs-discuss] NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?

2010-07-04 Thread Bohdan Tashchuk
Where can I find a list of these? This leads to the more generic question of: where are *any* release notes? I saw on Genunix that Community Edition 3.0.3 was replaced by 3.0.3-1. What changed? I went to nexenta.org and looked around. But it wasn't immediately obvious where to find release