On Jun 13, 2007, at 11:43 AM, Markus Schiltknecht wrote:
In the mean time, I've figured out that the box in question peaked
at about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to
compare again to RAID 6.
Is there any place where such results are collected?
There is the ill-use
On 6/13/07, Markus Schiltknecht <[EMAIL PROTECTED]> wrote:
Hi,
Mark Wong wrote:
> Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K
> RPM scsi drives and 6 dual-channel controllers... :)
Lucky you!
In the mean time, I've figured out that the box in question peaked at
about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to
compare again to RAID 6.
Hi,
Mark Wong wrote:
Yeah, I ran with 500+ warehouses, but I had 6 14-disk arrays of 15K
RPM scsi drives and 6 dual-channel controllers... :)
Lucky you!
In the mean time, I've figured out that the box in question peaked at
about 1450 NOTPMs with 120 warehouses with RAID 1+0. I'll try to
compare again to RAID 6.
On 6/11/07, Markus Schiltknecht <[EMAIL PROTECTED]> wrote:
Heikki Linnakangas wrote:
> Markus Schiltknecht wrote:
>> For dbt2, I've used 500 warehouses and 90 concurrent connections,
>> default values for everything else.
>
500? That's just too much for the hardware. Start from say 70 warehouses
and up it from there 10 at a time until you hit the wall.
Heikki Linnakangas wrote:
Markus Schiltknecht wrote:
For dbt2, I've used 500 warehouses and 90 concurrent connections,
default values for everything else.
500? That's just too much for the hardware. Start from say 70 warehouses
and up it from there 10 at a time until you hit the wall. I'm using 30
connections with ~10
Markus Schiltknecht wrote:
For dbt2, I've used 500 warehouses and 90 concurrent connections,
default values for everything else.
500? That's just too much for the hardware. Start from say 70 warehouses
and up it from there 10 at a time until you hit the wall. I'm using 30
connections with ~10
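For what it's worth, a minimal sketch of that stepping procedure, assuming a hypothetical wrapper script run_dbt2_test.sh that takes a warehouse count and a connection count (the real dbt2 run scripts and their flags differ between versions):

for w in $(seq 70 10 150); do
    # one run per warehouse count, 30 connections, keep the log per step
    ./run_dbt2_test.sh $w 30 > notpm_${w}w.log 2>&1
done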
Hi,
Jim Nasby wrote:
I don't think that kind of testing is useful for good raid controllers
on RAID5/6, because the controller will just be streaming the data out;
it'll compute the parity blocks on the fly and just stream data to the
drives as fast as possible.
That's why I called it 'simplistic'
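A rough back-of-envelope check of the streaming case (70 MB/s per spindle is only an assumed figure): in a 7-disk RAID6 each full stripe holds data on 5 of the 7 disks, so the sequential ceiling is roughly

echo "$(( (7 - 2) * 70 )) MB/s"   # ~350 MB/s when the controller computes parity on the fly

whereas a small random write forces a read-modify-write of the stripe, which is where RAID 5/6 tends to fall behind RAID 1+0.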
On Jun 4, 2007, at 1:56 PM, Markus Schiltknecht wrote:
Simplistic throughput testing with dd:
dd of=test if=/dev/zero bs=10K count=800000
800000+0 records in
800000+0 records out
8192000000 bytes (8.2 GB) copied, 37.3552 seconds, 219 MB/s
pamonth:/opt/dbt2/bb# dd if=test of=/dev/zero bs=10K coun
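For a slightly less cache-influenced number, GNU dd can flush before it reports, and the page cache can be dropped before the read-back (needs root); a sketch assuming the same 8 GB test file:

dd if=/dev/zero of=test bs=10K count=800000 conv=fdatasync   # rate includes the final fdatasync
sync
echo 3 > /proc/sys/vm/drop_caches                            # make the read-back hit the disks
dd if=test of=/dev/null bs=10K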
On 6/4/07, Markus Schiltknecht <[EMAIL PROTECTED]> wrote:
Thanks, that's exactly the one simple and very raw comparison value I've
been looking for. (Since most of the results pages of (former?) OSDL are
down).
Yeah, those results pages are gone for good. :(
Regards,
Mark
On Tue, 5 Jun 2007, Markus Schiltknecht wrote:
I'm really wondering if the RAID 6 of the ARECA 1260 hurts so badly
All of your disk performance tests look reasonable; certainly not slow
enough to cause the issue you're seeing. The only thing I've seen in this
thread that makes me slightly
Hi,
Heikki Linnakangas wrote:
Maybe, TPC-C is very write-intensive. I don't know much about RAID
stuff, but I think you'd really benefit from a separate WAL drive. You
could try turning fsync=off to see if that makes a difference.
Hm.. good idea, I'll try that.
Oh, and how many connections are you using?
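A throwaway way to try the fsync suggestion (test clusters only, never production; assumes $PGDATA points at the cluster used for the runs):

echo "fsync = off" >> $PGDATA/postgresql.conf   # unsafe: a crash can corrupt the cluster
pg_ctl -D $PGDATA restart

If NOTPM jumps dramatically with fsync off, the commit/WAL path is the bottleneck and a separate WAL disk (or the controller's write-cache setup) is worth a closer look.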
Markus Schiltknecht wrote:
Hi,
Heikki Linnakangas wrote:
I still suspect there's something wrong with plans, I doubt you can
get that bad performance unless it's doing something really stupid.
Agreed, but I'm still looking for that really stupid thing... AFAICT,
there are really no seqscans
Hi,
Heikki Linnakangas wrote:
I still suspect there's something wrong with plans, I doubt you can get
that bad performance unless it's doing something really stupid.
Agreed, but I'm still looking for that really stupid thing... AFAICT,
there are really no seqscans..., see the pg_stat_user_tables
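One quick way to double-check that from the statistics views (the database name dbt2 is an assumption):

psql dbt2 -c "SELECT relname, seq_scan, seq_tup_read, idx_scan
              FROM pg_stat_user_tables ORDER BY seq_scan DESC;"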
Markus Schiltknecht wrote:
Hi,
Heikki Linnakangas wrote:
There's clearly something wrong. The response times are ridiculously
high, they should be < 5 seconds (except for stock level transaction)
to pass a TPC-C test. I wonder if you built any indexes at all?
Hm.. according to the output/5/db/plan0.out, all queries use indexes.
Hi,
PFC wrote:
You have a huge amount of iowait !
Yup.
Did you put the xlog on a separate disk ?
No, it's all one big RAID6 for the sake of simplicity (plus I doubt
somewhat that 2 disks for WAL + 5 for data + 1 spare would be much
faster than 7 disks for WAL and data + 1 spare).
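Should a separate WAL device become available later, the usual way to test it (with the server stopped; the mount point below is only an example) is to move the transaction log directory and symlink it back:

pg_ctl -D $PGDATA stop
mv $PGDATA/pg_xlog /mnt/waldisk/pg_xlog      # /mnt/waldisk stands in for the WAL array
ln -s /mnt/waldisk/pg_xlog $PGDATA/pg_xlog
pg_ctl -D $PGDATA start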
I'll run a bonnie++ first. As the CPUs seem to be idle most of the time
(see the vmstat.out below), I'm suspecting the RAID or disks.
You have a huge amount of iowait !
Did you put the xlog on a separate disk ?
What filesystem do you use ?
Did you check that yo
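To see where the iowait actually goes while a run is in progress (iostat is part of the sysstat package):

vmstat 1 10      # a high 'wa' column means the CPUs are stalled on I/O
iostat -x 1 10   # per-device utilisation and average wait times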
Hi,
Heikki Linnakangas wrote:
There's clearly something wrong. The response times are ridiculously
high, they should be < 5 seconds (except for stock level transaction) to
pass a TPC-C test. I wonder if you built any indexes at all?
Hm.. according to the output/5/db/plan0.out, all queries use indexes.
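And to verify that the indexes really exist, independently of the plan files (again assuming the database is called dbt2):

psql dbt2 -c "SELECT tablename, indexname FROM pg_indexes
              WHERE schemaname = 'public' ORDER BY tablename, indexname;"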
Markus Schiltknecht wrote:
I'm currently playing with dbt2 and am wondering if the results I'm
getting are reasonable. I'm testing a 2x Dual Core Xeon system with 4 GB
of RAM and 8 SATA HDDs attached via an Areca RAID controller w/ battery-
backed write cache. Seven of the eight platters are configured as one
RAID6, one is a spare.
Hi,
I'm currently playing with dbt2 and am wondering if the results I'm
getting are reasonable. I'm testing a 2x Dual Core Xeon system with 4 GB
of RAM and 8 SATA HDDs attached via an Areca RAID controller w/ battery-
backed write cache. Seven of the eight platters are configured as one
RAID6, one is a spare.