Re: [PERFORM] Dataset is fetched from cache but still takes same time to fetch records as first run

2017-06-23 Thread Glyn Astill
>From: Tom Lane 
>To: Sumeet Shukla 
>Cc: Dave Stibrany ; pgsql-performance@postgresql.org
>Sent: Friday, 23 June 2017, 5:50
>Subject: Re: [PERFORM] Dataset is fetched from cache but still takes same time 
>to fetch records as first run
> Sumeet Shukla writes:
>> Yes, but when I actually execute the query in pgAdmin3, it takes exactly
>> the same time of 19.5 secs.
>
>pgAdmin is well known to be horribly inefficient at displaying large
>query results (and 121788 rows qualifies as "large" for this purpose,
>I believe).  The circa-tenth-of-a-second savings on the server side
>is getting swamped by client-side processing.
>
>It's possible that pgAdmin4 has improved matters in this area.

>

It's also possibly the time taken for the results to be transferred over the 
network if the data is large.
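
One quick way to separate server-side time from transfer/display overhead is to 
compare EXPLAIN ANALYZE (which discards the result set on the server) with a 
timed run in psql; a rough sketch, the table name is hypothetical:

-- psql: \timing reports the full round trip as seen by the client
\timing on
SELECT * FROM big_result_table;

-- EXPLAIN ANALYZE executes on the server and throws the rows away,
-- so its time excludes network transfer and client-side rendering
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_result_table;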

Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Hi

2017-04-13 Thread Glyn Astill

> From: Daulat Ram 
> To: "pgsql-performance@postgresql.org"  
> Sent: Thursday, 13 April 2017, 7:25
> Subject: [PERFORM] Hi 
>
> Hello, 
>  
> I need to know the criteria behind for settings the work_mem in PostgreSQL, 
> please give the example also if possible.
> 
> Regards,

> Daulat

Is there anything in particular from the manual pages you don't understand? It 
should be quite clear:

https://www.postgresql.org/docs/current/static/runtime-config-resource.html

"Specifies the amount of memory to be used by internal sort operations and hash 
tables before writing to temporary disk files. The value defaults to four 
megabytes (4MB). Note that for a complex query, several sort or hash operations 
might be running in parallel; each operation will be allowed to use as much 
memory as this value specifies before it starts to write data into temporary 
files."

"Also, several running sessions could be doing such operations concurrently. 
Therefore, the total memory used could be many times the value of work_mem; it 
is necessary to keep this fact in mind when choosing the value. Sort operations 
are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash 
joins, hash-based aggregation, and hash-based processing of IN subqueries."
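
As a rough illustration only (the values here are placeholders, not 
recommendations):

-- bump it for a single session running a big sort, then put it back
SET work_mem = '64MB';
SELECT customer_id, sum(amount)
  FROM orders                -- hypothetical table
 GROUP BY customer_id
 ORDER BY 2 DESC;
RESET work_mem;

-- or persistently for one role rather than the whole cluster
ALTER ROLE reporting_user SET work_mem = '32MB';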

Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Clarification on using pg_upgrade

2016-06-15 Thread Glyn Astill
- Original Message -

> From: Tory M Blue 
> To: Jim Nasby 
> Cc: "pgsql-performance@postgresql.org" 
> Sent: Tuesday, 14 June 2016, 22:08

> Subject: Re: [PERFORM] Clarification on using pg_upgrade
>
> Right, that's what we do, but then to upgrade, we have to drop/add the
> node, because it's being upgraded.  If I'm updating the underlying OS,
> I have to kill it all. If I'm doing a postgres upgrade, using an old
> version of slon, without using pg_upgrade, I have to drop the db,
> recreate it, which requires a drop/add.
> 
> I'm trying to figure out how to best do it using pg_upgrade  instead
> of the entire drop/add for postgres upgrades (which are needed if you
> are using slon as an upgrade engine for your db).
> 


I've just skimmed through this thread, but I can't quite gather what it is 
you're trying to achieve.  Are you looking to move away from Slony? Upgrade by 
any means with or without Slony?  Or just find a "fast" way of doing a major 
upgrade whilst keeping Slony in-place as your replication method?

If it's the latter, the easiest way is to have 2 or more subscribers subscribed 
to the same sets and then, one at a time, drop a subscriber node, upgrade and 
re-initdb, then use clone node to recreate it from another subscriber.  If 
you're intent on using pg_upgrade you might be able to fudge it as long as you 
can bump up the current txid to be greater than what it was before the upgrade; 
in fact I've done something similar before with a Slony subscriber, but only as 
a test on a small database.
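
To be clear, the txid fudge I mean is roughly the following, and I'd only ever 
try it on a throwaway copy first (paths and numbers are made up):

# on the old cluster, before running pg_upgrade: note the transaction counter
psql -d mydb -Atc "SELECT txid_current();"     # say it returns 123456789

# after pg_upgrade, with the new cluster shut down, push its counter past that
# (-x sets the next transaction ID; see the pg_resetxlog docs for the caveats)
pg_resetxlog -x 123456790 /path/to/new/data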


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] slony rpm help slony1-95-2.2.2-1.rhel6.x86_64

2016-06-09 Thread Glyn Astill
> From: avi Singh 
>To: pgsql-performance@postgresql.org 
>Sent: Saturday, 4 June 2016, 0:03
>Subject: [PERFORM] slony rpm help slony1-95-2.2.2-1.rhel6.x86_64
> 
>
>
>Hi All
> Can anyone please point me to location from where i can get slony 
> slony1-95-2.2.2-1.rhel5.x86_64  rpm. I'm upgrading database from version 9.3 
> to 9.5. Current version of rpm we are using is  slony1-93-2.2.2-1.el5.x86_64 
> and the one that is available on postgresql website for 9.5 is 
> slony1-95-2.2.4-4.rhel5.x86_64  which is not compatible and throws an error 
> when i test the upgrade.  In the past i was able to find the 2.2.2-1 version 
> rpm for previous versions on postgres website but not this time for 
> postgresql 9.5
>
>


What you'd be better off doing is installing Slony 2.2.4 on all your servers 
(or better, 2.2.5) rather than trying to get the older version.  If you can't 
get a package you could compile Slony yourself.


The "not compatible" error you mention is most likely because you've failed to 
update the Slony functions.  See:


http://slony.info/documentation/2.2/stmtupdatefunctions.html
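
A minimal slonik sketch of that step (cluster name and conninfos are obviously 
made up):

# run once against each node after the new Slony binaries are installed
cluster name = myrepl;
node 1 admin conninfo = 'dbname=mydb host=db1 user=slony';
node 2 admin conninfo = 'dbname=mydb host=db2 user=slony';
update functions (id = 1);
update functions (id = 2);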

Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Index scan cost calculation

2015-12-03 Thread Glyn Astill

> From: Jim Nasby <jim.na...@bluetreble.com>
>To: Jeff Janes <jeff.ja...@gmail.com>; Glyn Astill <glynast...@yahoo.co.uk> 
>Cc: Pgsql-performance <pgsql-performance@postgresql.org>
>Sent: Wednesday, 2 December 2015, 22:32
>Subject: Re: [PERFORM] Index scan cost calculation
> 
>
>On 11/30/15 5:03 PM, Jeff Janes wrote:
>> It thinks the combination of (show, type, best, block)  is enough to
>> get down to a single row.  One index adds "flag" to that (which is not
>> useful to the query) and the other adds "row" to that, which is useful
>> but the planner doesn't think it is because once you are down to a
>> single tuple additional selectivity doesn't help.
>
>It occurs to me that maybe you could force this behavior by building an 
>index on a row() instead of on the individual fields. IE:
>
>CREATE INDEX ... ON( row(show, type, best, block, row) )
>
>You would then have to query based on that:
>
>WHERE row(show, type, best, block, row) = row( 'Trans Siberian 
>Orchestra', 'Music', true, 1, 1 )
>
>You mentioned legacy code which presumably you can't modify to do that, 
>but maybe there's a way to trick the planner into it with a view...
>
>CREATE VIEW AS
>SELECT r.show, r.type, r..., etc, etc
>   FROM ( SELECT *, row(show, type, best, block, row) AS r FROM table ) a
>;
>
>When you stick a where clause on that there's a chance it'd get turned 
>into WHERE row() = row()... but now that I see it I'm probably being 
>over optimistic about that. You could probably force the issue with an 
>ON SELECT ON table DO INSTEAD rule, but IIRC those aren't supported.


Thanks, interesting idea, but no cigar.

For the moment, just ensuring that seats_index01 is the last index created seems 
to suffice, fragile though that is.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Index scan cost calculation

2015-12-01 Thread Glyn Astill
>
>Clauses that can't be used in an "indexable" way are excluded from the
>index selectivity, but not from the total query selectivity.
>
>> Or is it just likely that the selection of the new index is just by chance?
>
>Bingo.
>


Got it, thanks! Very much appreciated.


Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Index scan cost calculation

2015-11-30 Thread Glyn Astill


> From: Jeff Janes <jeff.ja...@gmail.com>
> To: Glyn Astill <glynast...@yahoo.co.uk>
> Cc: Pgsql-performance <pgsql-performance@postgresql.org>
> Sent: Saturday, 28 November 2015, 19:25
> Subject: Re: [PERFORM] Index scan cost calculation
> 
> 
> Why does the index seats_index02 exist in the first place?  It looks
> like an index designed for the benefit of a single query.  In which
> case, could flag column be moved up front?  That should prevent it
> from looking falsely enticing.
> 
> A column named "flag" is not usually the type of thing you expect to
> see a range query on, so moving it leftward in the index should not be
> a problem.
> 


Unfortunately it's not possible to move flag left in this scenario.

As you say, it's an issue that would not really exist in normal SQL access.  The 
main issue is the way the index is required for ordering; the index in question 
is used by a legacy language that accesses records sequentially, via a driver, 
as if they were coming directly from the ISAM files it used historically.  In 
some cases it steps through records for a particular show+type until a flag 
changes, and carries on unless particular values are seen.


If I create the index show+best+block+row+seat then the planner appears to 
favour that, and all is well, despite the startup cost estimate being the same 
and the total cost being 0.01 higher.  This is something I fail to understand 
fully.

Tom stated the index choice is due to a selectivity underestimate.  I think 
this may be because there is actually a correlation between "best"+"block" and 
"type", but from Tom's reply my understanding was that total selectivity for 
the query is calculated as the product of the individual selectivities in the 
where clause.  Are particular equality clauses ever excluded from the 
calculation as a result of available indexes or otherwise?  Or is it just 
likely that the selection of the new index is by chance?


Either way, my understanding here is definitely lacking.
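
For what it's worth, the per-column numbers the planner multiplies together can 
be eyeballed directly; a quick sketch against this table:

-- n_distinct and the most-common-value frequencies drive the equality selectivity
SELECT attname, null_frac, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
 WHERE schemaname = 'tickets'
   AND tablename  = 'seats'
   AND attname IN ('show', 'type', 'best', 'block', 'row', 'seat');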


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Index scan cost calculation

2015-11-26 Thread Glyn Astill
Hi All,

Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and 
trying to figure out why a particular index is being chosen over another for 
updates/deletes.

From what I can see the reason is that plans using either index have exactly 
the same cost.  So rather, I'm asking if there's something glaringly obvious 
I'm missing, or if there's anything I can do to get better estimates.

The table is as follows and has  ~ 50M rows, ~ 4.5GB in size:

CREATE TABLE tickets.seats
(
  recnum serial NOT NULL,
  show numeric(8,0) NOT NULL,
  type numeric(4,0) NOT NULL,
  block character varying(8) NOT NULL,
  "row" numeric(14,0) NOT NULL,
  seat numeric(8,0) NOT NULL,
  flag character varying(15) NOT NULL,
  transno numeric(8,0) NOT NULL,
  best numeric(4,0) NOT NULL,
  "user" character varying(15) NOT NULL,
  "time" numeric(10,0) NOT NULL,
  date date NOT NULL,
  date_reserved timestamp NOT NULL
);

Indexes:
  "seats_index01" PRIMARY KEY, btree (show, type, best, block, "row", seat) 
 // (1094 MB)
  "seats_index00" UNIQUE, btree (recnum)
  // (2423 MB)
  "seats_index02" UNIQUE, btree (show, type, best, block, flag, "row", seat, 
recnum)  // (2908 MB)

default_statistics target is 100, and the following columns are non-default:

 attname | attstattarget
---------+---------------
 show    |          1000
 type    |          1000
 block   |          2000
 row     |          1000
 seat    |          1000
 flag    |          1000
 best    |          1000

Increasing these further appears to make no noticeable difference.  (pg_stats 
for these columns here: http://pastebin.com/2WQQec7N)
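
(For reference, those per-column targets were applied along these lines, 
followed by an analyze:)

ALTER TABLE tickets.seats ALTER COLUMN show  SET STATISTICS 1000;
ALTER TABLE tickets.seats ALTER COLUMN block SET STATISTICS 2000;
-- ... and similarly for the other columns listed above
ANALYZE tickets.seats;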

An example query below shows that in some cases the seats_index02 index is 
being chosen:

# analyze verbose seats;
INFO:  analyzing "tickets.seats"
INFO:  "seats": scanned 593409 of 593409 pages, containing 50926456 live rows 
and 349030 dead rows; 60 rows in sample, 50926456 estimated total rows

# begin;
BEGIN
# explain analyze delete from seats where ("show" = 58919 AND "type" = 1 AND 
"best" = 10 AND "block" = 'GMA' AND "row" =26 AND "seat" = 15);
QUERY PLAN
-
Delete on seats  (cost=0.56..4.59 rows=1 width=6) (actual time=0.480..0.480 
rows=0 loops=1)
->  Index Scan using seats_index02 on seats  (cost=0.56..4.59 rows=1 width=6) 
(actual time=0.452..0.453 rows=1 loops=1)
Index Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = 
10::numeric) AND ((block)::text = 'GMA'::text) AND ("row" = 26::numeric) AND 
(seat = 15::numeric))
Planning time: 2.172 ms
Execution time: 0.531 ms
(5 rows)

But from my naive standpoint, seats_index01 is a better candidate:

# abort; begin;
ROLLBACK
BEGIN

# update pg_index set indisvalid = false where indexrelid = 
'seats_index02'::regclass;
# explain analyze delete from seats where ("show" = 58919 AND "type" = 1 AND 
"best" = 10 AND "block" = 'GMA' AND "row" =26 AND "seat" = 15);
QUERY PLAN
-
Delete on seats  (cost=0.56..4.59 rows=1 width=6) (actual time=0.103..0.103 
rows=0 loops=1)
->  Index Scan using seats_index01 on seats  (cost=0.56..4.59 rows=1 width=6) 
(actual time=0.078..0.080 rows=1 loops=1)
Index Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = 
10::numeric) AND ((block)::text = 'GMA'::text) AND ("row" = 26::numeric) AND 
(seat = 15::numeric))
Planning time: 0.535 ms
Execution time: 0.146 ms
(5 rows)


In this instance the time difference is not huge; however, in some seemingly 
random cases where there are a lot of rows with only the "seat" column 
differing, the difference with seats_index02 is much larger: ~70ms vs. a 
fraction of a millisecond with seats_index01.

I suspect some of the seemingly random cases could be where there's been an 
update, followed by a delete since the last analyze, despite auto analyze 
running fairly frequently.

Any suggestions appreciated.

Thanks
Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Index scan cost calculation

2015-11-26 Thread Glyn Astill
- Original Message -

> From: Glyn Astill <glynast...@yahoo.co.uk>
> To: Pgsql-performance <pgsql-performance@postgresql.org>
> Sent: Thursday, 26 November 2015, 16:11
> Subject: [PERFORM] Index scan cost calculation
> 
> Hi All,
> 
> Using pg 9.4.5 I'm looking at a table set up by a 3rd party application and 
> trying to figure out why a particular index is being chosen over another for 
> updates/deletes.
> 
> From what I can see the reason is that plans using either index have the same 
> exactly the same cost.  So rather I'm asking if there's something 
> glaringly obvious I'm missing, or is there anything I can to to get better 
> estimates.
> 
> The table is as follows and has  ~ 50M rows, ~ 4.5GB in size:
> 
> CREATE TABLE tickets.seats
> (
>   recnum serial NOT NULL,
>   show numeric(8,0) NOT NULL,
>   type numeric(4,0) NOT NULL,
>   block character varying(8) NOT NULL,
>   "row" numeric(14,0) NOT NULL,
>   seat numeric(8,0) NOT NULL,
>   flag character varying(15) NOT NULL,
>   transno numeric(8,0) NOT NULL,
>   best numeric(4,0) NOT NULL,
>   "user" character varying(15) NOT NULL,
>   "time" numeric(10,0) NOT NULL,
>   date date NOT NULL,
>   date_reserved timestamp NOT NULL
> );
> 
> Indexes:
>   "seats_index01" PRIMARY KEY, btree (show, type, best, block, 
> "row", seat)  // (1094 MB)
>   "seats_index00" UNIQUE, btree (recnum)  
> // (2423 MB)
>   "seats_index02" UNIQUE, btree (show, type, best, block, flag, 
> "row", seat, recnum)  // (2908 MB)

> 


^^ If those first two sizes look wrong, it's because they are; they should be 
the other way around.

> default_statistics target is 100, and the following columns are non-default:
> 
> attname | attstattarget
> +---
> show  |  1000
> type   |  1000
> block|  2000
> row|  1000
> seat   |  1000
> flag   |  1000
> best   |  1000
> 
> Increasing these further appears to make no noticeable difference. (pg_stats 
> here for these columns here: http://pastebin.com/2WQQec7N)
> 
> An example query below shows that in some cases the seats_index02 index is 
> being 
> chosen:
> 
> # analyze verbose seats;
> INFO:  analyzing "tickets.seats"
> INFO:  "seats": scanned 593409 of 593409 pages, containing 50926456 
> live rows and 349030 dead rows; 60 rows in sample, 50926456 estimated 
> total 
> rows
> 
> # begin;
> BEGIN
> # explain analyze delete from seats where ("show" = 58919 AND 
> "type" = 1 AND "best" = 10 AND "block" = 
> 'GMA' AND "row" =26 AND "seat" = 15);
> QUERY PLAN
> -
> Delete on seats  (cost=0.56..4.59 rows=1 width=6) (actual time=0.480..0.480 
> rows=0 loops=1)
> ->  Index Scan using seats_index02 on seats  (cost=0.56..4.59 rows=1 width=6) 
> (actual time=0.452..0.453 rows=1 loops=1)
> Index Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = 
> 10::numeric) AND ((block)::text = 'GMA'::text) AND ("row" = 
> 26::numeric) AND (seat = 15::numeric))
> Planning time: 2.172 ms
> Execution time: 0.531 ms
> (5 rows)
> 
> But from my naive standpoint, seats_index01 is a better candidate:
> 
> # abort; begin;
> ROLLBACK
> BEGIN
> 
> # update pg_index set indisvalid = false where indexrelid = 
> 'seats_index02'::regclass;
> # explain analyze delete from seats where ("show" = 58919 AND 
> "type" = 1 AND "best" = 10 AND "block" = 
> 'GMA' AND "row" =26 AND "seat" = 15);
> QUERY PLAN
> -
> Delete on seats  (cost=0.56..4.59 rows=1 width=6) (actual time=0.103..0.103 
> rows=0 loops=1)
> ->  Index Scan using seats_index01 on seats  (cost=0.56..4.59 rows=1 width=6) 
> (actual time=0.078..0.080 rows=1 loops=1)
> Index Cond: ((show = 58919::numeric) AND (type = 1::numeric) AND (best = 
> 10::numeric) AND ((block)::text = 'GMA'::text) AND ("row" = 
> 26::numeric) AND (seat = 15::numeric))
> Planning time: 0.535 ms
> Execution time: 0.146 ms
> (5 rows)
> 
> 
> In this instance, the time difference is not huge, however in some se

Re: [PERFORM] Index scan cost calculation

2015-11-26 Thread Glyn Astill
- Original Message -

> From: Tom Lane <t...@sss.pgh.pa.us>
> To: Glyn Astill <glynast...@yahoo.co.uk>
> Cc: Pgsql-performance <pgsql-performance@postgresql.org>
> Sent: Thursday, 26 November 2015, 16:44
> Subject: Re: [PERFORM] Index scan cost calculation
> 
> Glyn Astill <glynast...@yahoo.co.uk> writes:
>>  Using pg 9.4.5 I'm looking at a table set up by a 3rd party application 
> and trying to figure out why a particular index is being chosen over another 
> for 
> updates/deletes.
>>  From what I can see the reason is that plans using either index have the 
> same exactly the same cost.  So rather I'm asking if there's something 
> glaringly obvious I'm missing, or is there anything I can to to get better 
> estimates.
> 
> I think what's happening is that it's estimating that exactly one index
> tuple needs to be visited in both cases, so that the cost estimates come
> out the same.  That's correct in the one case but overly optimistic in the
> other; the misestimate likely is a consequence of the index columns being
> interdependent.  For instance, if "type" can be predicted from the 
> other
> columns then specifying it isn't really adding anything to the query
> selectivity, but the planner won't know that.  We can conclude from the
> results you've shown that the planner thinks that show+type+best+block
> is sufficient to uniquely determine a table entry, which implies that
> at least some of those columns are strongly correlated with row+seat.
> 
> The problem will probably go away by itself as your table grows, but
> if you don't want to wait, you might want to reflect on which of the index
> columns might be (partially?) functionally dependent on the other columns,
> and whether you could redesign the key structure to avoid that.


Many thanks for the explanation.  Is such a functional dependency assumed, 
optimistically, purely on the basis of statistics gathered by analyze?  My 
(ignorant) thinking was that those sorts of decisions would only be made from 
keys or constraints on the table.


There's no way to determine a particular seat+row combination from 
show+type+best+block or vice versa.

We need show+type+best+block+row+seat to identify an individual row, but 
approximately 90% of the table has just a space " " for the value of "block", 
and zeros for both "best" and "row", and for each of those you could say any 
show+type would almost certainly have row+seat combinations of 0+1, 0+2 and so 
on.


Unfortunately it's an unnormalized legacy structure that I can't really change.
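
The quickest way I can see to observe the underestimate Tom describes is to 
compare the planner's row estimate for just the show+type+best+block prefix 
against the actual count; a sketch using the values from my earlier example:

-- estimated vs. actual rows for the prefix alone
EXPLAIN ANALYZE
SELECT count(*) FROM tickets.seats
 WHERE show = 58919 AND type = 1 AND best = 10 AND block = 'GMA';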


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] shared_buffers vs Linux file cache

2015-01-15 Thread Glyn Astill
 From: Huan Ruan huan.ruan...@gmail.com
To: pgsql-performance@postgresql.org 
Sent: Thursday, 15 January 2015, 11:30
Subject: [PERFORM] shared_buffers vs Linux file cache
 


Hi All


I thought 'shared_buffers' sets how much memory that is dedicated to 
PostgreSQL to use for caching data, therefore not available to other 
applications.


However, as shown in the following screenshots, The server (CentOS 6.6 64bit) 
has 64GB of RAM, and 'shared_buffer' is set to 32GB, but the free+buffer+cache 
is 60GB. 


Shouldn't the maximum value for free+buffer+cache be 32GB ( 64 - 32)?
Is 'shared_buffers' pre allocated to Postgres, and Postgres only?



I've not looked at the images, but I think you're getting PostgreSQL 
shared_buffers and the OS buffer cache mixed up; they are not the same.

PostgreSQL shared_buffers is specific to postgres, whereas the OS buffer cache 
will just use free memory to cache data pages from disk, and this is what 
you're seeing.

Some reading for you: 
http://www.tldp.org/LDP/sag/html/buffer-cache.html
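
If you want to see the two separately, something like this gives a rough 
picture (pg_buffercache is a contrib extension you'd need to install; the OS 
side is what free/top report as "cached"):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- how much of shared_buffers is actually holding pages right now
SELECT count(*) * 8 / 1024 AS shared_buffers_used_mb
  FROM pg_buffercache
 WHERE relfilenode IS NOT NULL;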

Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 9.0 performance degradation with kernel 3.11

2014-11-14 Thread Glyn Astill
 From: Filip Rembiałkowski filip.rembialkow...@gmail.com
To: pgsql-performance@postgresql.org 
Sent: Thursday, 13 November 2014, 8:10
Subject: [PERFORM] 9.0 performance degradation with kernel 3.11
 

Hi

After upgrading our 9.0 database server

from:
openSUSE 11.4, kernel 2.6.37.6-24-default, Pg 9.0.13

to:
openSUSE 13.1, kernel v 3.11.10-21-default, Pg 9.0.15

... and  overall server load is +1 after that.

We did not add any new services/daemons.

It's hard to track down to individual queries - when I tested most
individual query times are same as before the migration.


Any - ANY - hints will be much appreciated.

Thanks
Filip


It's hard to say much going on so little information, but assuming everything 
was rosy for you with your 2.6 version, and you've kept basics like the 
hardware, filesystem, I/O scheduler etc. the same, there are a few kernel 
tunables worth tweaking on later kernels.

Usually defragmentation of transparent huge pages causes an issue and it's best 
to turn off the defrag option:


echo always > /sys/kernel/mm/transparent_hugepage/enabled
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag


It's also recommended to increase the value of sched_migration_cost (I think 
now called sched_migration_cost_ns in 3.11+) and disable 
sched_autogroup_enabled.


kernel.sched_migration_cost=500 
kernel.sched_autogroup_enabled=0

Also disable vm.zone_reclaim_mode

vm.zone_reclaim_mode=0


On some of our systems I also saw marked improvements increasing the values of 
kernel.sched_min_granularity_ns and kernel.sched_wakeup_granularity_ns too, on 
some other systems this had no effect.  So you may want to try to see if some 
larger values there help.

A lot of the earlier 3.x kernels aren't great with PostgreSQL; one of the noted 
issues is a "stable pages" feature that blocks processes from modifying pages 
that are currently being written back until the write completes.  I think 
people have noted this gets better from 3.9 onwards, but I personally didn't 
see much of a marked improvement until 3.16.
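
To persist the sysctl side you'd put the same keys in /etc/sysctl.conf and 
reload; the transparent hugepage echoes need re-running at boot.  A rough 
sketch:

# apply now
sysctl -w kernel.sched_autogroup_enabled=0
sysctl -w vm.zone_reclaim_mode=0

# persist: add the same key=value lines to /etc/sysctl.conf, then
sysctl -p

# THP defrag setting, re-run at boot (e.g. from /etc/rc.local)
echo madvise > /sys/kernel/mm/transparent_hugepage/defrag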







-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 12.04

2013-02-28 Thread Glyn Astill
 From: Josh Berkus j...@agliodbs.com
To: Scott Marlowe scott.marl...@gmail.com 
Cc: pgsql-performance@postgresql.org 
Sent: Thursday, 21 February 2013, 3:14
Subject: Re: [PERFORM] High CPU usage / load average after upgrading to Ubuntu 
12.04
 

 Sounds to me like your IO system is stalling on fsyncs or something
 like that.  On machines with plenty of IO cranking up completion
 target usuall smooths things out. 

It certainly seems like it does.  However, I can't demonstrate the issue
using any simpler tool than pgbench ... even running four test_fsyncs in
parallel didn't show any issues, nor do standard FS testing tools.



I've missed a load of this thread and just scanned through what I can see, so 
apologies if I'm repeating anything.

If the suspicion is the I/O system and you've tuned everything you can think 
of, is there anything interesting in meminfo/iostat/vmstat before/during the 
stalls?  If so, can you cause anything similar via bonnie++ with the -b option?
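
Something along these lines for the bonnie++ run (sizes/paths are placeholders; 
-b disables write buffering so every write is followed by an fsync, which is 
closer to database behaviour):

# test file should be at least twice RAM to defeat the page cache;
# -n 0 skips the small-file creation phase
bonnie++ -d /var/lib/postgresql/bonnie -s 64g -n 0 -b -u postgres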


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] hardware advice

2012-10-03 Thread Glyn Astill
- Original Message -

 From: David Boreham david_l...@boreham.org
 To: pgsql-performance@postgresql.org pgsql-performance@postgresql.org
 Cc: 
 Sent: Tuesday, 2 October 2012, 16:14
 Subject: Re: [PERFORM] hardware advice
 
 On 10/2/2012 2:20 AM, Glyn Astill wrote:
  newer R910s recently all of a sudden went dead to the world; no prior 
 symptoms
  showing in our hardware and software monitoring, no errors in the os logs,
  nothing in the dell drac logs.  After a hard reset it's back up as if
  nothing happened, and it's an issue I'm none the wiser to the 
 cause.  Not
  good piece of mind.
 This could be an OS bug rather than a hardware problem.

Yeah, actually I'm leaning towards this being a specific bug in the Linux 
kernel.  Everything else I said still stands though.



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] hardware advice

2012-10-02 Thread Glyn Astill


 From: M. D. li...@turnkey.bz
To: pgsql-performance@postgresql.org 
Sent: Friday, 28 September 2012, 18:33
Subject: Re: [PERFORM] hardware advice
 
On 09/28/2012 09:57 AM, David Boreham wrote:
 On 9/28/2012 9:46 AM, Craig James wrote:
 Your best warranty would be to have the confidence to do your own
 repairs, and to have the parts on hand.  I'd seriously consider
 putting your own system together.  Maybe go to a few sites with
 pre-configured machines and see what parts they use.  Order those,
 screw the thing together yourself, and put a spare of each critical
 part on your shelf.
 
 This is what I did for years, but after taking my old parts collection to 
 the landfill a few times, realized I may as well just buy N+1 machines and 
 keep zero spares on the shelf. That way I get a spare machine available for 
 use immediately, and I know the parts are working (parts on the shelf may be 
 defective). If something breaks, I use the spare machine until the 
 replacement parts arrive.
 
 Note in addition that a warranty can be extremely useful in certain 
 organizations as a vehicle of blame avoidance (this may be its primary 
 purpose in fact). If I buy a bunch of machines that turn out to have buggy 
 NICs, well that's my fault and I can kick myself since I own the company, 
 stay up late into the night reading kernel code, and buy new NICs. If I have 
 an evil Dilbertian boss, then well...I'd be seriously thinking about buying 
 Dell boxes in order to blame Dell rather than myself, and be able to say 
 everything is warrantied if badness goes down. Just saying...
 
I'm kinda in the latter shoes.  Dell is the only thing that is trusted in my 
organisation.  If I would build my own, I would be fully blamed for anything 
going wrong in the next 3 years. Thanks everyone for your input.  Now my final 
choice will be if my budget allows for the latest and fastest, else I'm going 
for the x5690.  I don't have hundreds of users, so I think the x5690 should do 
a pretty good job handling the load.



Having plenty of experience with Dell, I'd urge you to reconsider.  All the 
Dell servers we've had have arrived hideously misconfigured, and tech support 
gets you nowhere.  Once we've rejigged the hardware ourselves, maybe replacing 
a part or two, they've performed okay.

Reliability has been okay; however, one of our newer R910s recently all of a 
sudden went dead to the world: no prior symptoms showing in our hardware and 
software monitoring, no errors in the OS logs, nothing in the Dell DRAC logs.  
After a hard reset it's back up as if nothing happened, and it's an issue I'm 
none the wiser to the cause of.  Not good for peace of mind.

Look around and find another vendor, even if your company has to pay more for 
you to have that blame avoidance.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] H800 + md1200 Performance problem

2012-04-17 Thread Glyn Astill
 From: Scott Marlowe scott.marl...@gmail.com
On Mon, Apr 16, 2012 at 8:13 AM, Cesar Martin cmart...@gmail.com wrote:
 Hi,

 Finally the problem was BIOS configuration. DBPM had was set to Active
 Power Controller I changed this to Max
 Performance. http://en.community.dell.com/techcenter/power-cooling/w/wiki/best-practices-in-power-management.aspx
 Now wirite speed are 550MB/s and read 1,1GB/s.

Why in the world would a server be delivered to a customer with such a
setting turned on?  ugh.


Because it's Dell and that's what they do.  


When our R910s arrived, despite them knowing what we were using them for, 
they'd installed the memory to use only one channel per CPU.  Buried deep in 
their manual I discovered that they called this "power optimised" mode, and I 
had to buy a whole extra bunch of risers to be able to use all of the channels 
properly.

If it wasn't for proper load testing, and Greg Smith's stream-scaling tests, I 
don't think I'd even have spotted it.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] H800 + md1200 Performance problem

2012-04-05 Thread Glyn Astill
 From: Tomas Vondra t...@fuzzy.cz

 But the fluctuation, that surely is strange. What are the page cache
 dirty limits, i.e.
 
 cat /proc/sys/vm/dirty_background_ratio
 cat /proc/sys/vm/dirty_ratio
 
 That's probably #1 source I've seen responsible for such issues (on
 machines with a lot of RAM).
 

+1 on that.

We're running similar 32 core dell servers with H700s and 128Gb RAM.

With those at the defaults (I don't recall if it's 5 and 10 respectively) 
you're looking at 3.2GB of dirty pages before pdflush flushes them and 6.4GB 
before the process is forced to flush itself.
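
On boxes with that much RAM it can be worth pinning these to absolute sizes 
instead; the *_bytes variants override the ratios (figures below are just an 
illustration):

# /etc/sysctl.conf -- example values only
vm.dirty_background_bytes = 268435456    # start background writeback at 256MB dirty
vm.dirty_bytes = 1073741824              # force writers to flush at 1GB dirty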


Re: [PERFORM] Very long deletion time on a 200 GB database

2012-02-23 Thread Glyn Astill
Do you have any more detailed information about the hardware, what sort of disk 
configuration does it have?

Can you get onto the machine to look at what is using those resources?  You 
mention the 25GB of virtual memory; is that being used?  If so, is it being 
used by postgres or something else?  If it's being used by postgres you should 
change postgresql.conf to work within your 5GB; otherwise, can you stop the 
other applications to do your delete?  A snapshot from Task Manager or Process 
Monitor of per-process resource usage would be useful, and even better some 
logging from perfmon including physical disk usage.


What would be even more useful is the table definitions; you mention trying to 
drop constraints to speed it up, but is there anything else at play, e.g. 
triggers?
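
(The quickest way to grab that is probably something like the following from 
psql; the table name is a placeholder:)

-- columns, indexes, foreign keys, rules and triggers in one go
\d+ b

-- or list the triggers explicitly (includes FK constraint triggers)
SELECT tgname, pg_get_triggerdef(oid)
  FROM pg_trigger
 WHERE tgrelid = 'b'::regclass;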




 From: Reuven M. Lerner reu...@lerner.co.il
To: pgsql-performance@postgresql.org 
Sent: Thursday, 23 February 2012, 8:39
Subject: [PERFORM] Very long deletion time on a 200 GB database
 


Hi, everyone.  I'm maintaining an application that exists as a black box in 
manufacturing plants.  The system is based on Windows, .NET, and PostgreSQL 
8.3.  I'm a Unix kind of guy myself, but the application layer and system 
administration are being handled by other people; I'm just the PostgreSQL guy.

Because of the nature of the application, we don't have direct control over 
what happens.  And it turns out that at one installation, we're quickly 
running out of disk space.  The database is already taking up about 200 GB of 
space, and is growing by 1 GB or so a day.  Switching disks, either to a 
larger/faster traditional drive, or even to a SSD, is not an option.  (And 
yes, I know that SSDs have their own risks, but I'm just throwing that out as 
one option.)

Right now, the best solution to the space problem is to delete information 
associated with old records, where old is from at least 30 days ago.  The 
old records are spread across a few tables, including many large objects.  
(The application was written by people who were new to PostgreSQL, and didn't 
realize that they could use BYTEA.) Basically, given a foreign key B.a_id that 
points to table A, I want to DELETE all in B where A's creation date is at 
least 30 days ago.

Unfortunately, when we implemented this simple delete, it executed slower than 
molasses, taking about 9 hours to do its thing.   Not only does this seem like 
a really, really long time to do such deleting, but we have only a 4-hour 
window in which to run this maintenance activity, before the factory starts to 
use our black box again.

I've tried a few approaches so far, none of which have been hugely 
successful.  The fact that it takes several hours to test each theory is 
obviously a bit of a pain, and so I'm curious to hear suggestions from people 
here.

I should note that my primary concern is available RAM.  The database, as I 
wrote, is about 200 GB in size, and PostgreSQL is reporting (according to 
Windows) use of about 5 GB RAM, plus another 25 GB of virtual memory.  I've 
told the Windows folks on this project that virtual memory kills a database, 
and that it shouldn't surprise us to have horrible performance if the database 
and operating system are both transferring massive amounts of data back and 
forth.  But there doesn't seem to be a good way to handle this

This is basically what I'm trying to execute:

DELETE FROM B
WHERE r_id IN (SELECT R.id
                 FROM R, B
                WHERE r.end_date < (NOW() - (interval '1 day' * 30))
                  AND r.id = b.r_id)

(1) I tried to write this as a join, rather than a subselect.  But B has an 
oid column that points to large objects, and on which we have a rule that 
removes the associated large object when a row in B is removed. Doing the 
delete as a join resulted in no such large object with an oid of xxx 
errors.  (I'm not sure why, although it might have to do with the rule.)

(2) I tried to grab the rows that *do* interest me, put them into a temporary 
table, TRUNCATE the existing table, and then copy the rows back.   I only 
tested that with a 1 GB subset of the data, but that took longer than other 
options.

(3) There are some foreign-key constraints on the B table.  I thought that 
perhaps doing a mass DELETE was queueing up all of those constraints, and 
possibly using up lots of memory and/or taking a long time to execute.  I thus 
rewrote my queries such that they first removed the constraints, then executed 
the DELETE, and then restored the constraints.  That didn't seem to improve 
things much either, and took a long time (30 minutes) just to remove the 
constraints.  I expected re-adding the constraints to take a while, but 
shouldn't removing them be relatively quick?

(4) I tried chunking the deletes, such that instead of trying to delete all 
of the records from the B table, I would instead delete just those associated 
with 100 or 200 rows from the R table.  On a 1 GB subset of the data, this 
seemed to work 

Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-13 Thread Glyn Astill
--- On Tue, 12/4/11, Greg Smith g...@2ndquadrant.com wrote:

 From: Greg Smith g...@2ndquadrant.com
 Subject: Re: [PERFORM] Linux: more cores = less concurrency.
 To: Kevin Grittner kevin.gritt...@wicourts.gov
 Cc: da...@lang.hm, Steve Clark scl...@netwolves.com, Glyn Astill 
 glynast...@yahoo.co.uk, Joshua D. Drake j...@commandprompt.com, Scott 
 Marlowe scott.marl...@gmail.com, pgsql-performance@postgresql.org
 Date: Tuesday, 12 April, 2011, 18:00
 Kevin Grittner wrote:
  Glyn Astill glynast...@yahoo.co.uk
 wrote:
     
  Results from Greg Smiths stream_scaling test are
 here:
  
  http://www.privatepaste.com/4338aa1196
      
   Well, that pretty much clinches it.  Your
 RAM access tops out at 16
  processors.  It appears that your processors are
 spending most of
  their time waiting for and contending for the RAM
 bus.
    
 
 I've pulled Glyn's results into https://github.com/gregs1104/stream-scaling 
 so they're
 easy to compare against similar processors, his system is
 the one labled 4 X X7550.  I'm hearing this same story
 from multiple people lately:  these 32+ core servers
 bottleneck on aggregate memory speed with running PostgreSQL
 long before the CPUs are fully utilized.  This server
 is close to maximum memory utilization at 8 cores, and the
 small increase in gross throughput above that doesn't seem
 to be making up for the loss in L1 and L2 thrashing from
 trying to run more.  These systems with many cores can
 only be used fully if you have a program that can work
 efficiency some of the time with just local CPU
 resources.  That's very rarely the case for a database
 that's moving 8K pages, tuple caches, and other forms of
 working memory around all the time.
 
 
  I have gotten machines in where moving a jumper,
 flipping a DIP
  switch, or changing BIOS options from the default made
 a big
  difference.  I'd be looking at the manuals for my
 motherboard and
  BIOS right now to see what options there might be to
 improve that
 
 I already forwarded Glyn a good article about tuning these
 Dell BIOSs in particular from an interesting blog series
 others here might like too:
 
 http://bleything.net/articles/postgresql-benchmarking-memory.html
 
 Ben Bleything is doing a very thorough walk-through of
 server hardware validation, and as is often the case he's
 already found one major problem with the vendor config he
 had to fix to get expected results.
 

Thanks Greg.  I've been through that post, but unfortunately there are no 
settings that make a difference.
However upon further investigation and looking at the manual for the R910 here

http://support.dell.com/support/edocs/systems/per910/en/HOM/HTML/install.htm#wp1266264

I've discovered we only have 4 of the 8 memory risers, and the manual states 
that in this configuration we are running in Power Optimized mode, rather 
than Performance Optimized.

We've got two of these machines, so I've just pulled all the risers from one 
system, removed half the memory as indicated by that document from Dell above, 
and now I'm seeing almost double the throughput.



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-12 Thread Glyn Astill
--- On Tue, 12/4/11, Merlin Moncure mmonc...@gmail.com wrote:

  The issue I'm seeing is that 8 real cores
 outperform 16 real
  cores, which outperform 32 real cores under high
 concurrency.
 
  With every benchmark I've done of PostgreSQL, the
 knee in the
  performance graph comes right around ((2 * cores) +
  effective_spindle_count).  With the database fully
 cached (as I
  believe you mentioned), effective_spindle_count is
 zero.  If you
  don't use a connection pool to limit active
 transactions to the
  number from that formula, performance drops off.  The
 more CPUs you
  have, the sharper the drop after the knee.
 
 I was about to say something similar with some canned
 advice to use a
 connection pooler to control this.  However, OP
 scaling is more or
 less topping out at cores / 4...yikes!.  Here are my
 suspicions in
 rough order:
 
 1. There is scaling problem in client/network/etc. 
 Trivially
 disproved, convert the test to pgbench -f and post results
 2. The test is in fact i/o bound. Scaling is going to be
 hardware/kernel determined.  Can we see
 iostat/vmstat/top snipped
 during test run?  Maybe no-op is burning you?

This is during my 80-client test; this is a point at which the performance is 
well below that of the same machine limited to 8 cores.

http://www.privatepaste.com/dc131ff26e

 3. Locking/concurrency issue in heavy_seat_function()
 (source for
 that?)  how much writing does it do?
 

No writing AFAIK - it's a select with a few joins and subqueries - I'm pretty 
sure it's not writing out temp data either, but all clients are after the same 
data in the test - maybe there's some locking there?

 Can we see some iobound and cpubound pgbench runs on both
 servers?
 

Of course, I'll post when I've gotten to that.


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-12 Thread Glyn Astill
--- On Tue, 12/4/11, Scott Marlowe scott.marl...@gmail.com wrote:

 From: Scott Marlowe scott.marl...@gmail.com
 Subject: Re: [PERFORM] Linux: more cores = less concurrency.
 To: Glyn Astill glynast...@yahoo.co.uk
 Cc: pgsql-performance@postgresql.org
 Date: Tuesday, 12 April, 2011, 6:55
 On Mon, Apr 11, 2011 at 7:04 AM, Glyn
 Astill glynast...@yahoo.co.uk
 wrote:
  Hi Guys,
 
  I'm just doing some tests on a new server running one
 of our heavy select functions (the select part of a plpgsql
 function to allocate seats) concurrently.  We do use
 connection pooling and split out some selects to slony
 slaves, but the tests here are primeraly to test what an
 individual server is capable of.
 
  The new server uses 4 x 8 core Xeon X7550 CPUs at
 2Ghz, our current servers are 2 x 4 core Xeon E5320 CPUs at
 2Ghz.
 
  What I'm seeing is when the number of clients is
 greater than the number of cores, the new servers perform
 better on fewer cores.
 
 O man, I completely forgot the issue I ran into in my
 machines, and
 that was that zone_reclaim completely screwed postgresql
 and file
 system performance.  On machines with more CPU nodes
 and higher
 internode cost it gets turned on automagically and
 destroys
 performance for machines that use a lot of kernel cache /
 shared
 memory.
 
 Be sure and use sysctl.conf to turn it off:
 
 vm.zone_reclaim_mode = 0
 

I've made this change and not seen any immediate difference; however, it's good 
to know.  Thanks Scott.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-12 Thread Glyn Astill
--- On Mon, 11/4/11, Kevin Grittner kevin.gritt...@wicourts.gov wrote:

 From: Kevin Grittner kevin.gritt...@wicourts.gov
 Subject: Re: [PERFORM] Linux: more cores = less concurrency.
 To: da...@lang.hm, Steve Clark scl...@netwolves.com, Kevin Grittner 
 kevin.gritt...@wicourts.gov, Glyn Astill glynast...@yahoo.co.uk
 Cc: Joshua D. Drake j...@commandprompt.com, Scott Marlowe 
 scott.marl...@gmail.com, pgsql-performance@postgresql.org
 Date: Monday, 11 April, 2011, 22:35
 Kevin Grittner kevin.gritt...@wicourts.gov
 wrote:
  
  I don't know why you were hitting the knee sooner than
 I've seen
  in my benchmarks
  
 If you're compiling your own executable, you might try
 boosting
 LOG2_NUM_LOCK_PARTITIONS (defined in lwlocks.h) to 5 or
 6.  The
 current value of 4 means that there are 16 partitions to
 spread
 contention for the lightweight locks which protect the
 heavyweight
 locking, and this corresponds to your best throughput
 point.  It
 might be instructive to see what happens when you tweak the
 number
 of partitions.
  

I tried tweaking LOG2_NUM_LOCK_PARTITIONS between 5 and 7.  My results took a 
dive when I changed to 32 partitions, and improved as I increased to 128, but 
appeared to be happiest at the default of 16.
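
(For anyone repeating this: it's just a constant bumped in the headers and a 
rebuild, roughly as below; the header is lwlock.h in the sources I've got.)

# edit src/include/storage/lwlock.h in the source tree, e.g.
#   #define LOG2_NUM_LOCK_PARTITIONS  5    /* 32 partitions instead of 16 */
./configure --prefix=/usr/local/pgsql-lockparttest
make
make install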

 Also, if you can profile PostgreSQL at the sweet spot and
 again at a
 pessimal load, comparing the profiles should give good
 clues about
 the points of contention.
  

Results for the same machine on 8 and 32 cores are here:

http://www.8kb.co.uk/server_benchmarks/dblt_results.csv

Here's the sweet spot for 32 cores, and the 8 core equivalent:

http://www.8kb.co.uk/server_benchmarks/iostat-32cores_32Clients.txt
http://www.8kb.co.uk/server_benchmarks/vmstat-32cores_32Clients.txt

http://www.8kb.co.uk/server_benchmarks/iostat-8cores_32Clients.txt
http://www.8kb.co.uk/server_benchmarks/vmstat-8cores_32Clients.txt

... and at the pessimal load for 32 cores, and the 8 core equivalent:

http://www.8kb.co.uk/server_benchmarks/iostat-32cores_100Clients.txt
http://www.8kb.co.uk/server_benchmarks/vmstat-32cores_100Clients.txt

http://www.8kb.co.uk/server_benchmarks/iostat-8cores_100Clients.txt
http://www.8kb.co.uk/server_benchmarks/vmstat-8cores_100Clients.txt
   
vmstat shows double the context switches on 32 cores; could this be a factor?  
Is there anything else I'm missing there?

Cheers
Glyn

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-12 Thread Glyn Astill
--- On Tue, 12/4/11, Merlin Moncure mmonc...@gmail.com wrote:

  Can we see some iobound and cpubound pgbench
 runs on both
  servers?
 
 
  Of course, I'll post when I've gotten to that.
 
  Ok, there's no writing going on -- so the i/o tets
 aren't necessary.
  Context switches are also not too high -- the problem
 is likely in
  postgres or on your end.
 
  However, I Would still like to see:
  pgbench select only tests:
  pgbench -i -s 1
  pgbench -S -c 8 -t 500
  pgbench -S -c 32 -t 500
  pgbench -S -c 80 -t 500
 
  pgbench -i -s 500
  pgbench -S -c 8 -t 500
  pgbench -S -c 32 -t 500
  pgbench -S -c 80 -t 500
 
  write out bench.sql with:
  begin;
  select * from heavy_seat_function();
  select * from heavy_seat_function();
  commit;
 
  pgbench -n bench.sql -c 8 -t 500
  pgbench -n bench.sql -c 8 -t 500
  pgbench -n bench.sql -c 8 -t 500
 
 whoops:
 pgbench -n bench.sql -c 8 -t 500
 pgbench -n bench.sql -c 32 -t 500
 pgbench -n bench.sql -c 80 -t 500
 
 merlin
 

Right, here they are:

http://www.privatepaste.com/3dd777f4db



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-12 Thread Glyn Astill
--- On Tue, 12/4/11, Kevin Grittner kevin.gritt...@wicourts.gov wrote:

 Wow, zero idle and zero wait, and single digit for
 system.  Did you
 ever run those RAM speed tests?  (I don't remember
 seeing results
 for that -- or failed to recognize them.)  At this
 point, my best
 guess at this point is that you don't have the bandwidth to
 RAM to
 support the CPU power.  Databases tend to push data
 around in RAM a
 lot.

I mentioned sysbench was giving me something like 3000 MB/sec on memory write 
tests, but nothing more.
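
I don't have the exact invocation to hand, but it was something along these 
lines (old sysbench 0.4-style options):

sysbench --test=memory --memory-block-size=1M \
         --memory-total-size=10G --memory-oper=write run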

Results from Greg Smiths stream_scaling test are here:

http://www.privatepaste.com/4338aa1196

  
 When I mentioned profiling, I was thinking more of oprofile
 or
 something like it.  If it were me, I'd be going there
 by now.
  

Advice taken, it'll be my next step.

Glyn

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Linux: more cores = less concurrency.

2011-04-11 Thread Glyn Astill
Hi Guys,

I'm just doing some tests on a new server running one of our heavy select 
functions (the select part of a plpgsql function to allocate seats) 
concurrently.  We do use connection pooling and split out some selects to slony 
slaves, but the tests here are primarily to test what an individual server is 
capable of.

The new server uses 4 x 8 core Xeon X7550 CPUs at 2Ghz, our current servers are 
2 x 4 core Xeon E5320 CPUs at 2Ghz.

What I'm seeing is when the number of clients is greater than the number of 
cores, the new servers perform better on fewer cores.

Has anyone else seen this behaviour?  I'm guessing this is either a hardware 
limitation or something to do with Linux process management / scheduling?  Any 
idea what to look into?

My benchmark utility is just a little .NET/Npgsql app that runs increasing 
numbers of clients concurrently, each client running a specified number of 
iterations of any SQL I specify.

I've posted some results and the test program here:

http://www.8kb.co.uk/server_benchmarks/


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-11 Thread Glyn Astill


--- On Mon, 11/4/11, Joshua D. Drake j...@commandprompt.com wrote:

 From: Joshua D. Drake j...@commandprompt.com
 Subject: Re: [PERFORM] Linux: more cores = less concurrency.
 To: Kevin Grittner kevin.gritt...@wicourts.gov
 Cc: pgsql-performance@postgresql.org, Glyn Astill glynast...@yahoo.co.uk
 Date: Monday, 11 April, 2011, 19:12
 On Mon, 11 Apr 2011 13:09:15 -0500,
 Kevin Grittner
 kevin.gritt...@wicourts.gov
 wrote:
  Glyn Astill glynast...@yahoo.co.uk
 wrote:
   
  The new server uses 4 x 8 core Xeon X7550 CPUs at
 2Ghz
   
  Which has hyperthreading.
   
  our current servers are 2 x 4 core Xeon E5320 CPUs
 at 2Ghz.
   
  Which doesn't have hyperthreading.
   

Yep, off.  If you look at the benchmarks I took, HT absolutely killed it.

  PostgreSQL often performs worse with hyperthreading
 than without. 
  Have you turned HT off on your new machine?  If
 not, I would start
  there.
 
 And then make sure you aren't running CFQ.
 
 JD
 

Not running CFQ, running the no-op i/o scheduler.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-11 Thread Glyn Astill
--- On Mon, 11/4/11, Scott Marlowe scott.marl...@gmail.com wrote:

 Just FYI, in synthetic pgbench type benchmarks, a 48 core
 AMD Magny
 Cours with LSI HW RAID and 34 15k6 Hard drives scales
 almost linearly
 up to 48 or so threads, getting into the 7000+ tps
 range.  With SW
 RAID it gets into the 5500 tps range.
 

I'll have to try the synthetic benchmarks next then, but something's definitely 
going on here.  I'm seeing no disk activity at all, as they're selects and all 
pages are in RAM.
 
I was wondering if anyone had any deeper knowledge of any kernel tunables, or 
anything else for that matter.

A wild guess is something like multiple cores contending for CPU cache, CPU 
affinity, or some kind of contention in the kernel, but that's a little out of 
my depth.

It's pretty sickening to think I can't get anything else out of more than 8 
cores.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-11 Thread Glyn Astill
--- On Mon, 11/4/11, da...@lang.hm da...@lang.hm wrote:

 From: da...@lang.hm da...@lang.hm
 Subject: Re: [PERFORM] Linux: more cores = less concurrency.
 To: Steve Clark scl...@netwolves.com
 Cc: Scott Marlowe scott.marl...@gmail.com, Joshua D. Drake 
 j...@commandprompt.com, Kevin Grittner kevin.gritt...@wicourts.gov, 
 pgsql-performance@postgresql.org, Glyn Astill glynast...@yahoo.co.uk
 Date: Monday, 11 April, 2011, 21:04
 On Mon, 11 Apr 2011, Steve Clark
 wrote:
 
 the limit isn't 8 cores, it's that the hyperthreaded cores
 don't work well with the postgres access patterns.
 

This has nothing to do with hyperthreading. I have a hyperthreaded benchmark 
purely for completion, but can we please forget about it.

The issue I'm seeing is that 8 real cores outperform 16 real cores, which 
outperform 32 real cores under high concurrency.

32 cores is much faster than 8 when I have relatively few clients, but as the 
number of clients is scaled up 8 cores wins outright.

I was hoping someone had seen this sort of behaviour before, and could offer 
some sort of explanation or advice.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Linux: more cores = less concurrency.

2011-04-11 Thread Glyn Astill


--- On Mon, 11/4/11, Scott Marlowe scott.marl...@gmail.com wrote:

 From: Scott Marlowe scott.marl...@gmail.com
 Subject: Re: [PERFORM] Linux: more cores = less concurrency.
 To: Glyn Astill glynast...@yahoo.co.uk
 Cc: Kevin Grittner kevin.gritt...@wicourts.gov, Joshua D. Drake 
 j...@commandprompt.com, pgsql-performance@postgresql.org
 Date: Monday, 11 April, 2011, 21:52
 On Mon, Apr 11, 2011 at 1:42 PM, Glyn
 Astill glynast...@yahoo.co.uk
 wrote:
 
  A wild guess is something like multiple cores
 contending for cpu cache, cpu affinity, or some kind of
 contention in the kernel, alas a little out of my depth.
 
  It's pretty sickening to think I can't get anything
 else out of more than 8 cores.
 
 Have you tried running the memory stream benchmark Greg
 Smith had
 posted here a while back?  It'll let you know if
 you're memory is
 bottlenecking.  Right now my 48 core machines are the
 king of that
 benchmark with something like 70+Gig a second.
 

No I haven't, but I will first thing tomorrow morning.  I did run a sysbench 
memory write test though; if I recall correctly that gave me somewhere just 
over 3000 MB/s.



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Linux I/O schedulers - CFQ random seeks

2011-03-04 Thread Glyn Astill
Hi Guys,

I'm in the process of setting up some new hardware and am just doing some basic 
disk performance testing with bonnie++ to start with.

I'm seeing a massive difference on the random seeks test, with CFQ not 
performing very well as far as I can see.  The thing is I didn't see this sort 
of massive divide when doing tests with our current hardware.

Current hardware: 2x4core E5420 @2.5Ghz/ 32GB RAM/ Adaptec 5805Z w' 512Mb/ Raid 
10/ 8 15k 3.5 Disks
New hardware: 4x8core X7550 @2.0Ghz/ 128GB RAM/ H700 w' 1GB/ Raid 10/ 12 15.2k 
2.5 Disks

Admittedly, my testing on our current hardware was on 2.6.26 and on the new 
hardware it's on 2.6.32 - I think I'm going to have to check the current 
hardware on the older kernel too.

I'm wondering (and this may be a can of worms) what people's opinions are on 
these schedulers.  I'm going to have to do some real-world testing myself with 
PostgreSQL too, but initially I was thinking of switching from our current CFQ 
back to deadline.

Any opinions would be appreciated.
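
(For reference, this is roughly how I'm flipping schedulers between runs; the 
device name is whatever the controller presents, sdb here:)

# show available schedulers; the active one is in square brackets
cat /sys/block/sdb/queue/scheduler

# switch at runtime, or set elevator=deadline on the kernel
# command line to make it the default at boot
echo deadline > /sys/block/sdb/queue/scheduler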

Regardless, here are some sample results from the new hardware:

CFQ:

Version  1.96   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Way5ax  258376M   666  99 434709  96 225498  35  2840  69 952115  76 556.2   3
Latency 12344us 619ms 522ms 255ms 425ms 529ms
Version  1.96   --Sequential Create-- Random Create
Way5ax  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16 28808  41 + +++ + +++ + +++ + +++ + +++
Latency  6170us 594us 633us7619us  20us  36us
1.96,1.96,Way5ax,1,1299173113,258376M,,666,99,434709,96,225498,35,2840,69,952115,76,556.2,3,16,28808,41,+,+++,+,+++,+,+++,+,+++,+,+++,12344us,619ms,522ms,255ms,425ms,529ms,6170us,594us,633us,7619us,20us,36us

deadline:

Version  1.96   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Way5ax  258376M   696  99 449914  96 287010  47  2952  69 989527  78  2304  19
Latency 11939us 856ms 570ms 174ms 228ms   24744us
Version  1.96   --Sequential Create-- Random Create
Way5ax  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16 31338  45 + +++ + +++ + +++ + +++ + +++
Latency  5605us 605us 627us6590us  19us  38us
1.96,1.96,Way5ax,1,1299237441,258376M,,696,99,449914,96,287010,47,2952,69,989527,78,2304,19,16,31338,45,+,+++,+,+++,+,+++,+,+++,+,+++,11939us,856ms,570ms,174ms,228ms,24744us,5605us,605us,627us,6590us,19us,38us

no-op:

Version  1.96   --Sequential Output-- --Sequential Input- --Random-
Concurrency   1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Way5ax  258376M   706  99 451578  95 303351  49  4104  96 1003688  78  2294  19
Latency 11538us 530ms1460ms   12141us 350ms   22969us
Version  1.96   --Sequential Create-- Random Create
Way5ax  -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16 31137  44 + +++ + +++ + +++ + +++ + +++
Latency  5918us 597us 627us5039us  17us  36us
1.96,1.96,Way5ax,1,1299245225,258376M,,706,99,451578,95,303351,49,4104,96,1003688,78,2294,19,16,31137,44,+,+++,+,+++,+,+++,+,+++,+,+++,11538us,530ms,1460ms,12141us,350ms,22969us,5918us,597us,627us,5039us,17us,36us

--
Glyn




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Which RAID Controllers to pick/avoid?

2011-02-03 Thread Glyn Astill
--- On Thu, 3/2/11, Greg Smith g...@2ndquadrant.com wrote:
 The 5405 and 5805 models do have a known problem where they overheat if
 you don't have enough cooling in the server box, with the 5805 seeming
 to be the bigger source of such issues.  See the reviews at
 http://www.newegg.com/Product/Product.aspx?Item=N82E16816103099 for
 example.  Scott Marlowe was griping recently about a similar issue in
 some of the LSI models, too.  I suspect it's a problem impacting
 several of the larger RAID cards that use the big Intel IOP processors
 for their RAID computations, given that's the part with the heatsink on
 it.

 Quick summary:  avoid the Adaptec 3405 and 4800.  Rest are decent
 cards.  Just make sure you monitor the temperatures in your case (and
 the card too if arcconf lets you, I haven't checked for that yet) if
 you end up with a 5405/5805.


I can attest to the 5805 and 5805Z cards running a little hot; the ones we're 
running tend to sit in the high 60s and low 70s Celsius with fairly good 
airflow over them.

I've been running some 5805s for 3 years now, and 5805Zs for a year and they've 
been really good, stable, fast cards.  I monitor everything on them (including 
temperature) with nagios and a simple script that uses the arcconf utility.
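
The check itself is nothing clever, roughly the sketch below; the arcconf path, 
controller number and thresholds are just examples, and the temperature line 
format can differ between arcconf versions, so treat it as a starting point:

#!/bin/bash
# crude nagios check: scrape the controller temperature out of arcconf
ARCCONF=/usr/StorMan/arcconf
WARN=75
CRIT=85

TEMP=$($ARCCONF GETCONFIG 1 AD | grep -i temperature | grep -o '[0-9]\+' | head -n1)

if [ -z "$TEMP" ]; then
    echo "UNKNOWN: could not read controller temperature"; exit 3
elif [ "$TEMP" -ge "$CRIT" ]; then
    echo "CRITICAL: controller temperature ${TEMP}C"; exit 2
elif [ "$TEMP" -ge "$WARN" ]; then
    echo "WARNING: controller temperature ${TEMP}C"; exit 1
else
    echo "OK: controller temperature ${TEMP}C"; exit 0
fi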

Glyn





-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SATA drives performance

2009-12-27 Thread Glyn Astill
--- On Fri, 25/12/09, Scott Marlowe scott.marl...@gmail.com wrote:

 It does kind of knock the stuffing out of the argument that
 buying
 from the big vendors ensures good hardware
 experiences.  I've had
 similar problems from all the big vendors in the
 past.  I can't
 imagine getting treated that way by my current
 supplied.  It's one
 thing for some obscure bug in a particular ubuntu kernel to
 interact
 poorly with a piece of equipment, but when a hardware RAID
 controller
 arrives in a basically broken state, that's
 inexcusable.  It's really
 not too much to expect working hardware on arrival.

Last month I found myself taking a power drill to our new dell boxes in order to 
route cables to replacement raid cards. Having to do that made me feel really 
unprofessional and a total cowboy, but it was either that or shitty performance.






-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] RAID card recommendation

2009-11-25 Thread Glyn Astill
--- On Tue, 24/11/09, Scott Marlowe scott.marl...@gmail.com wrote:

 Jochen Erwied
 joc...@pgsql-performance.erwied.eu
 wrote:
 
  Since I'm currently looking at upgrading my own
 database server, maybe some
  of the experts can give a comment on one of the
 following controllers:
 
  - Promise Technology Supertrak ES4650 + additional
 BBU
  - Adaptec RAID 5405 SGL/256 SATA/SAS + additional BBU
  - Adaptec RAID 5405Z SGL/512 SATA/SAS
 
  My personal favourite currently is the 5405Z, since it
 does not require
  regular battery replacements and because it has 512MB
 of cache.
 
 Have you searched the -performance archives for references
 to them?
 I'm not that familiar with Adaptec RAID controllers. 
 Not requiring a
 battery check / replacement is nice.
 

We've been running Adaptec 5805s for the past year and I've been pretty happy, 
I think they have the same dual core IOP348 as the Areca 1680s.

I've a bunch of 5805Zs on my desk ready to go in some new servers too (that 
means more perc6 cards to chuck on my smash pile) and I'm excited to see how 
they go; I fear the unknown a bit though, and I'm not sure the sight of the big 
capacitors is reassuring me...

Only problem I've seen is one controller periodically reporting it's too hot, but 
I suspect that may be something to do with the server directly above it having 
fanless power supplies.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Adaptec Zero-Maintenance Cache Protection - Anyone using?

2009-11-11 Thread Glyn Astill
Hi Chaps,

I'm putting together some new servers, and whilst I've been happy with our 
current config of Adaptec 5805's with bbu I've noticed these 5805Z cards, 
apparently the contents of DRAM is copied into onboard flash upon power failure.

Just wondered if anyone had any experience of this sort of technology yet?

So far my head is telling me to just go with what I know...


Glyn




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] [OT] Recommended whitebox server vendors in the UK?

2009-10-15 Thread Glyn Astill
Hi chaps,

Can anyone recommend a decent server vendor in the UK?

I'm looking to deploy a new machine to handle some of our non-critical data, 
and I'm just wondering if I can avoid the pains I've had with dell hardware 
recently.

Also whilst I'm asking, does anyone else find their dell BMC/DRAC interfaces 
just die from time to time?

Glyn


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Used computers?

2009-07-20 Thread Glyn Astill

Here in the UK, we have Waste Electrical and Electronic Equipment (WEEE) 
companies that'll safely destroy or sell them on for a cut of the profits.

--- On Mon, 20/7/09, Craig James craig_ja...@emolecules.com wrote:

 From: Craig James craig_ja...@emolecules.com
 Subject: [PERFORM] Used computers?
 To: pgsql-performance@postgresql.org
 Date: Monday, 20 July, 2009, 3:29 PM
 Apologies for a slightly off-topic
 question ... a friend is overseeing the demise of a company
 and has several computers that they need to get rid
 of.  She's an attorney and knows little about them
 except that they're IBM and cost $50K originally. 
 Where does one go to sell equipment like this, and/or get a
 rough idea of its worth?
 
 Thanks,
 Craig
 
 -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance
 




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] running bonnie++

2009-05-27 Thread Glyn Astill

You should be able to get a good idea of the options from man bonnie++; I've 
always just used the defaults with bonnie++.

Also, you'll find Greg's older notes here:

http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm
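
If you do want to be explicit rather than relying on the defaults, the sort of 
invocation I'd use looks something like this (sizes and paths are just 
examples, so check man bonnie++ before trusting them):

# run as an unprivileged user, in a directory on the array under test,
# with a file size of at least twice RAM so the OS cache can't soak it up;
# -f skips the slow per-char tests and -n 0 skips the file creation tests
bonnie++ -d /mnt/test -s 16384 -u postgres -f -n 0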

--- On Wed, 27/5/09, Alan McKay alan.mc...@gmail.com wrote:

 From: Alan McKay alan.mc...@gmail.com
 Subject: [PERFORM] running bonnie++
 To: pgsql-performance@postgresql.org
 Date: Wednesday, 27 May, 2009, 2:24 PM
 Hey folks,
 
 During Greg Smith's lecture last week I could have sworn I
 saw on the
 screen at some point a really long command line for
 bonnie++ - with
 all the switches he uses.
 
 But checking his slides I don't see this.
 
 Am I mis-remembering?
 
 Can someone recommend the best way to run it?  What
 combination of switches?
 
 thanks,
 -Alan
 
 -- 
 “Mother Nature doesn’t do bailouts.”
          - Glenn Prickett
 
 -- 
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance
 




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 2.6.26 kernel and PostgreSQL

2009-04-13 Thread Glyn Astill




--- On Mon, 13/4/09, Greg Smith gsm...@gregsmith.com wrote:

 From: Greg Smith gsm...@gregsmith.com
 Subject: Re: [PERFORM] 2.6.26 kernel and PostgreSQL
 To: Glyn Astill glynast...@yahoo.co.uk
 Cc: pgsql-performance@postgresql.org, Kevin Grittner 
 kevin.gritt...@wicourts.gov
 Date: Monday, 13 April, 2009, 9:25 AM
 On Fri, 10 Apr 2009, Glyn Astill wrote:
 
  So it was only for connections over a unix socket, but
 wow; it's still an ongoing issue.
 
 The problem is actually with pgbench when running on a UNIX
 socket, not with the PostgreSQL server itself.  On my tests,
 the actual database server itself seems to work just as well
 or better on later kernels that use the new scheduler than
 the older scheduler did.
 
 Basically, if all these apply:
 
 1) You are running pgbench
 2) You're running a quick statement, such as a simple
 select, that gives 
  1TPS or so
 3) Connecting via UNIX socket
 4) Clients  around 10
 5) Linux kernel =2.6.23 (which means CFS as the
 scheduler)
 6) The CFS features are at their defaults
 (SCHED_FEAT_SYNC_WAKEUPS is on)
 
 You'll get weird results.  Change any of those and
 things are still fine.
 

Ace, I'll upgrade today then. Thanks Greg




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] 2.6.26 kernel and PostgreSQL

2009-04-10 Thread Glyn Astill

Hi chaps,

Is anyone using 2.6.26 with postgres?  I was thinking about shifting my home 
test machine up from 2.6.18, however I recall reading a post somewhere a while 
back about the scheduler in more recent versions being a bit cranky...

I just thought I'd ask before I go ahead, I don't have too much time for 
testing etc at the moment.

thanks
Glyn




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] 2.6.26 kernel and PostgreSQL

2009-04-10 Thread Glyn Astill




--- On Fri, 10/4/09, Kevin Grittner kevin.gritt...@wicourts.gov wrote:

 Glyn Astill glynast...@yahoo.co.uk wrote: 
  I was thinking about shifting my home test machine up
 from 2.6.18,
  however I recall reading a post somewhere a while back
 about the
  scheduler in more recent versions being a bit
 cranky...
  
 A recent post on the topic:
  
 http://archives.postgresql.org/pgsql-performance/2009-04/msg00098.php
  
 -Kevin
 

So it was only for connections over a unix socket, but wow; it's still an 
ongoing issue.  Nice to see somebody is on top of it though.





-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Prepared statement does not exist

2009-03-20 Thread Glyn Astill

--- On Fri, 20/3/09, Nimesh Satam nimesh.z...@gmail.com wrote:

 From: Nimesh Satam nimesh.z...@gmail.com
   We are receving the following error in the
 postgres
   database logs:
  
   2009-03-19 02:14:20 PDT [2547]: [79-1] LOG: 
 duration:
   0.039 ms  statement:
   RESET ALL
   2009-03-19 02:14:20 PDT [2547]: [80-1] LOG: 
 duration:
   0.027 ms  statement:
   SET SESSION AUTHORIZATION DEFAULT
   2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR: 
 prepared
   statement S_1 does
   not exist
   2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT:
   DEALLOCATE S_1
   2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR: 
 prepared
   statement S_4 does
   not exist
   2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT:
   DEALLOCATE S_4
  
   We receive this errors when we start connecting
 the java
   application
   thorugh pgpool. What causes this problem and how
 can it be
   avoided?
 
  Looks like your app is disconnecting from pgpool
 which is causing pgpool
  to send the RESET ALL, this will deallocate the
 prepared statement. Then the
  app is reconnecting to pgpool again and expecting the
 prepared statement to
  still be available, which it will not be.
 
 Thank for your reply. But can you confirm on this? As what
 I see from
 the logs, its pgpool which is trying to deallocate the
 prepared
 statement and not the application. The application just
 disconnects
 and not tyring to use the same connection.

There is the possibility that it's pgpool sending the deallocate in error after 
the reset all then. Either way, this is not relevant to the performance list; 
send it over to the pgpool list... and tell them your pgpool version number too 
- it may be a bug that's already been fixed.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Prepared statement does not exist

2009-03-19 Thread Glyn Astill




--- On Thu, 19/3/09, Nimesh Satam nimesh.z...@gmail.com wrote:
 
 We are receving the following error in the postgres
 database logs:
 
 2009-03-19 02:14:20 PDT [2547]: [79-1] LOG:  duration:
 0.039 ms  statement:
 RESET ALL
 2009-03-19 02:14:20 PDT [2547]: [80-1] LOG:  duration:
 0.027 ms  statement:
 SET SESSION AUTHORIZATION DEFAULT
 2009-03-19 02:14:20 PDT [2547]: [81-1] ERROR:  prepared
 statement S_1 does
 not exist
 2009-03-19 02:14:20 PDT [2547]: [82-1] STATEMENT: 
 DEALLOCATE S_1
 2009-03-19 02:14:20 PDT [2547]: [83-1] ERROR:  prepared
 statement S_4 does
 not exist
 2009-03-19 02:14:20 PDT [2547]: [84-1] STATEMENT: 
 DEALLOCATE S_4
 
 We receive this errors when we start connecting the java
 application
 thorugh pgpool. What causes this problem and how can it be
 avoided?

Looks like your app is disconnecting from pgpool, which is causing pgpool to 
send the RESET ALL; this will deallocate the prepared statement. Then the app 
is reconnecting to pgpool again and expecting the prepared statement to still 
be available, which it will not be.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] please help with the explain analyze plan

2009-02-11 Thread Glyn Astill
Both queries are using your uid index on each of the partitions, not 
generated_date; the generated_date condition is applied as a filter on most of 
the partitions.

The exception is partition part_2006_02 in the second query, where it uses 
your generated_date index - and that is what takes the 80 secs.

 -  Index Scan using rfis_part_2006_02_generated_date on rfis_part_2006_02 rfis  (cost=0.00..6.45 rows=1 width=0) (actual time=80827.207..80827.207 rows=0 loops=1)

Also the second query appears to go through a few more partitions than the 
first, i.e. part_2001_2004 and part_1005


--- On Wed, 11/2/09, Rajesh Kumar Mallah mallah.raj...@gmail.com wrote:

 From: Rajesh Kumar Mallah mallah.raj...@gmail.com
 Subject: [PERFORM] please help with the explain analyze plan
 To: pgsql-performance@postgresql.org
 Date: Wednesday, 11 February, 2009, 10:58 AM
 Dear friends,
 
 I have explain analyze of two queries
 
 explain analyze SELECT count(*) from general.rfis where 
 1=1   and
 inquiry_type = 'BUY'and receiver_uid=1320721;
 (7 ms)
 http://pastebin.com/m5297e03c
 
 explain analyze SELECT count(*) from general.rfis where 
 1=1   and
 inquiry_type = 'BUY'and receiver_uid=1320721
 generated_date =
 2251  and ;
 (80 secs)
 http://pastebin.com/d1e4bdea7
 
 
 The table general.rfis is partitioned on generated_date and
 the
 condition generated_date = 2251
 was added with the intention to limit the number of (date
 based)
 partitions that would be searched
 by the query using the constraint exclusion facility.
 however as
 evident the query has become very
 slow as a result of this condition (even on primed caches).
 
 can anyone kindly explain why the result was so counter
 intuitive ?
 
 In particular where is most of the (80828.438 ms) spent on
 the plan
 http://pastebin.com/d1e4bdea7 (reference to actual line
 number is appreciated)
 
 
 
 structure of a typical partition (abridged)
 
  Table rfi_partitions.rfis_part_2009_01
 Column |  Type  |
  Modifiers
 ---++---
  rfi_id| integer| not null
 default
 nextval('general.rfis_rfi_id_seq'::regclass)
  sender_uid| integer| not null
  receiver_uid  | integer| not null
  subject   | character varying(100) | not null
  message   | text   | not null
  inquiry_type  | character varying(50)  | default
 'BUY'::character varying
  inquiry_source| character varying(30)  | not null
  generated_date| integer| not null
 default
 general.current_date_id()
 Indexes:
 rfis_part_2009_01_pkey PRIMARY KEY, btree
 (rfi_id)
 rfis_part_2009_01_generated_date btree
 (generated_date)
 rfis_part_2009_01_receiver_uid btree
 (receiver_uid) CLUSTER
 rfis_part_2009_01_sender_uid btree
 (sender_uid)
 Check constraints:
 rfis_part_2009_01_generated_date_check
 CHECK (generated_date =
 3289 AND generated_date = 3319)
 rfis_part_2009_01_rfi_id_check CHECK
 (rfi_id = 12344252 AND
 rfi_id = 12681399)
 Inherits: rfis
 
 regds
 rajesh kumar mallah.
 
 -- 
 Sent via pgsql-performance mailing list
 (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] scheduling autovacuum at lean hours only.

2009-02-11 Thread Glyn Astill

 From: Rajesh Kumar Mallah mallah.raj...@gmail.com

 Is it possible to configure autovacuum to run only
 during certain hours ? We are forced to keep
 it off because it pops up during the peak
 query hours.

AFAIK not directly within the conf.

However you could probably set up a shell script to turn it on and off as and 
when required via cron; just change the setting to off in the conf file and 
reload.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller

2009-02-06 Thread Glyn Astill
--- On Fri, 6/2/09, Bruce Momjian br...@momjian.us wrote:

 Stupid question, but why do people bother with the Perc
 line of cards if
 the LSI brand is better?  It seems the headache of trying
 to get the
 Perc cards to perform is not worth any money saved.

I think in most cases the dell cards actually cost more; people end up stuck 
with them because they come bundled with their servers - they find out too late 
that they've got a lemon.

Up until recently those in charge of buying hardware where I work insisted 
everything be supplied from dell. Fortunately that policy is no more; I have 
enough paperweights.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i controller

2009-02-05 Thread Glyn Astill



--- On Thu, 5/2/09, Matt Burke mattbli...@icritical.com wrote:

 From: Matt Burke mattbli...@icritical.com
 Subject: Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i 
  controller
 To: pgsql-performance@postgresql.org
 Date: Thursday, 5 February, 2009, 12:40 PM
 Arjen van der Meijden wrote:
 
  Afaik the Perc 5/i and /e are more or less rebranded
 LSI-cards (they're
  not identical in layout etc), so it would be a bit
 weird if they
  performed much less than the similar LSI's
 wouldn't you think?
 
 I've recently had to replace a PERC4/DC with the exact
 same card made by
 LSI (320-2) because the PERCs firmware was crippled. Its
 idea of RAID10
 actually appears to be concatenated RAID1 arrays.
 

Did you try flashing the PERC with the LSI firmware?

I tried flashing a PERC3/dc with LSI firmware; it worked fine but I saw no 
difference in performance, so I assumed it must be something else on the board 
that cripples it.






-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SSD performance

2009-01-23 Thread Glyn Astill
 I spotted a new interesting SSD review. it's a $379
 5.25 drive bay device that holds up to 8 DDR2 DIMMS
 (up to 8G per DIMM) and appears to the system as a SATA
 drive (or a pair of SATA drives that you can RAID-0 to get
 past the 300MB/s SATA bottleneck)
 

Sounds very similar to the Gigabyte iRam drives of a few years ago

http://en.wikipedia.org/wiki/I-RAM






-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] caching written values?

2009-01-22 Thread Glyn Astill
 
 A quick question, when pg receives data to be written to a
 table, does it cache that data in memory in case a
 subsequent request/query would need it?
 

Afaik all pages are modified in memory (in shared buffers), so the modified 
data would still be cached.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] understanding postgres issues/bottlenecks

2009-01-11 Thread Glyn Astill
--- On Sun, 11/1/09, Scott Marlowe scott.marl...@gmail.com wrote:

 They also told me we could never lose power in the hosting
 center
 because it was so wonder and redundant and that I was
 wasting my time.

Well, that's just plain silly; at the very least there's always going to be 
some breakers / fuses in between the power and the machines.

In fact in our building there's quite a few breaker points between our comms 
room on the 3rd floor and the ups / generator in the basement. It's a crappy 
implementation actually.





-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] understanding postgres issues/bottlenecks

2009-01-08 Thread Glyn Astill

--- On Thu, 8/1/09, Stefano Nichele stefano.nich...@gmail.com wrote:

 From: Stefano Nichele stefano.nich...@gmail.com
 Subject: Re: [PERFORM] understanding postgres issues/bottlenecks
 To: Scott Marlowe scott.marl...@gmail.com
 Cc: pgsql-performance@postgresql.org
 Date: Thursday, 8 January, 2009, 8:36 AM
 Find !
 
 Dell CERC SATA RAID 2 PCI SATA 6ch
 
 Running lspci -v:
 
 03:09.0 RAID bus controller: Adaptec AAC-RAID (rev 01)
 Subsystem: Dell CERC SATA RAID 2 PCI SATA 6ch
 (DellCorsair)
 Flags: bus master, 66MHz, slow devsel, latency 32,
 IRQ 209
 Memory at f800 (32-bit, prefetchable)
 [size=64M]
 Expansion ROM at fe80 [disabled] [size=32K]
 Capabilities: [80] Power Management version 2
 

Hmm, the 64M / 6ch makes it sound like this card

http://accessories.us.dell.com/sna/products/Controllers/productdetail.aspx?c=usl=ens=bsdcs=04sku=310-5975

Which is a 6ch dell version of

http://www.adaptec.com/en-US/support/raid/sata/AAR-2410SA/

I have one on my smash pile.  The only thing that makes me think otherwise is 
the 2 in CERC SATA RAID 2 ...





-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Fwd: Casting issue!!

2009-01-07 Thread Glyn Astill



--- On Wed, 7/1/09, jose fuenmayor jaf...@gmail.com wrote:
 
 Hi all I am trying to migrate from postgresql 8.2.x to 
 8.3.x, i have an
 issue with casting values when i try to perform the auto
 cast , it does not
 work and I get an error, how can i perform  auto casting on
 8.3 without
 rewrite my source code, I am using pl/pgsql. I mean i dont
 want to write
 value::dataType. I dont want to use explicit type cast.
 Maybe change
 something in the config files? to make it work like 8.2 on
 tha regard(cast
 values).
 thanks a lot!!!
 Kind Regards;
 Jose Fuenmayor

As the others have said; you'd be better off posting this in the pgsql-general 
list.

However I think you could probably work your way around this with a function 
and a cast, as described here:

http://www.depesz.com/index.php/2008/05/05/error-operator-does-not-exist-integer-text-how-to-fix-it/
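
If I remember right, the gist of that article is to recreate the implicit casts 
that 8.3 dropped; a minimal sketch for integer to text is below (needs to be 
run as a superuser, adjust for whichever casts your code relies on, and be 
aware it brings back the old loose-typing behaviour):

psql -d yourdb <<'SQL'
-- recreate an 8.2-style implicit cast from integer to text
CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE
    LANGUAGE SQL AS 'SELECT textin(int4out($1));';

CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
SQL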





-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] understanding postgres issues/bottlenecks

2009-01-07 Thread Glyn Astill
--- On Wed, 7/1/09, Scott Marlowe scott.marl...@gmail.com wrote:

  The really bad news is that
 you can't
 generally plug in a real RAID controller on a Dell.  We put
 an Areca
 168-LP PCI-x8 in one of our 1950s and it wouldn't even
 turn on, got a
 CPU Error.
 

Hmm, I had to pull the perc5i's out of our dell servers to get them to boot off 
of our Adaptec 5805's

Anyway that reminds me, I must remember to bring my lump hammer into work at 
some point...




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] understanding postgres issues/bottlenecks

2009-01-07 Thread Glyn Astill
--- On Wed, 7/1/09, Scott Marlowe scott.marl...@gmail.com wrote:

 Just to elaborate on the horror that is a Dell perc5e.  We
 have one in
 a 1950 with battery backed cache (256 Meg I think).  It has
 an 8 disk
 500Gig SATA drive RAID-10 array and 4 1.6GHz cpus and 10
 Gigs ram.

Our perc5i controllers performed better in raid 5 than 10. Sounds like the 
comment you made when I was wasting my time with that perc3 fits all dell cards 
perfectly: brain damaged.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Perc 3 DC

2008-11-24 Thread Glyn Astill
--- On Mon, 24/11/08, Steve Clark [EMAIL PROTECTED] wrote:

  Yeah the battery's on it, that and the 128Mb is
 really the only reason I thought I'd give it a whirl.
  
  
 Is the battery  functioning? We found that the unit had to
 be on and charged before write back caching
 would work.

Yeah the battery is on there, and in the BIOS it says it's PRESENT and the 
status is GOOD.






-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Perc 3 DC

2008-11-24 Thread Glyn Astill
--- Scott Marlowe [EMAIL PROTECTED] wrote:

 
  Yeah the battery is on there, and in the BIOS it says it's
 PRESENT and the status is GOOD.
 
 If I remember correctly, older LSI cards had pretty poor
 performance
 in RAID 1+0 (or any layered RAID really).  Have you tried setting
 up
 RAID-1 pairs on the card and then striping them with the OS?
 

Not yet, no, but that's a good suggestion and I do intend to give it a
whirl.  I get about 27MB/s from raid 1 (10 is about the same), so
hopefully I can get the throughput up to about the speed of one disk with
sw raid.
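
For the record the plan would be to expose the two hardware RAID-1 pairs as 
separate logical drives and stripe them with md, roughly like so (device names 
and chunk size are just examples):

# two hardware mirrors presented to the OS as sdb and sdc,
# striped together into md0
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sdb /dev/sdc

# then the filesystem goes on top as usual
mkfs.ext3 /dev/md0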

For kicks I did try raid 5 on it; 6.9MB/s made it hard to resist
going to get the hammer, which is still a very attractive option.



  

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Perc 3 DC

2008-11-22 Thread Glyn Astill
Hi chaps,

I've had this old card sitting on my desk for a while. It appears to be a U160 
card with a 128Mb BBU so I thought I'd wang it in my test machine (debian etch) 
and give it a bash.

I set up 4 36Gb drives in raid 0+1, but I don't seem to be able to get more 
than 20MB/s write speed out of it for large files (the usual 2x RAM tests with 
dd from /dev/zero). I don't expect anything great, but I thought it'd do a little 
better than that.
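
(For reference the dd runs are nothing fancy, something like the below, sized 
at roughly twice RAM so the cache can't soak it up - the path and count are 
just examples for a 4GB box.)

# ~8GB of zeros in 8k blocks onto the array mounted at /mnt/test
dd if=/dev/zero of=/mnt/test/ddtest bs=8k count=1000000
sync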

I've tried writeback and write through modes, tried changing cache flush times, 
disabled and enabled multiple PCI delayed transactions, all seem to have little 
effect.

Finally I decided to wave goodbye to Dell's firmware. LSI has it down as a 
MegaRAID 493 elite 1600, so I flashed it with their latest firmware.  Doesn't 
seem to have helped either though.

Has anybody else used this card in the past?  I'm wondering if this is a driver 
issue, or if the card is and always was just crap. If so I'll probably try sw 
raid on it instead.

Any thoughts?

Glyn




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Perc 3 DC

2008-11-22 Thread Glyn Astill
--- On Sat, 22/11/08, Scott Marlowe [EMAIL PROTECTED] wrote:
 
 You really have two choices.  First is to try and use it as
 a plain
 SCSI card, maybe with caching turned on, and do the raid in
 software.
 Second is to cut it into pieces and make jewelry out of it.

Haha, I'm not really into jewelry, although I had thought of smacking it into a 
pile of dust with a lump hammer, that's much more my thing.

  Anything
 before the Perc 6 series is seriously brain damaged, and
 the Perc6
 brings the dell raid array line squarly in line with a 5
 year old LSI
 megaraid, give or take.  And that's being generous.
 

Well this card thinks it's a 5 year old lsi megaraid. I've got a pile of perc5i 
megaraid paperweights on my desk at work, so this was kinda expected really.

  I've tried writeback and write through modes,
 tried changing cache flush times, disabled and enabled
 multiple PCI delayed transactions, all seem to have little
 effect.
 
 Yeah, it's like trying to performance tune a yugo.
 

Did I mention I drive a yugo?

  Finally I decided to wave goodbye to Dell's
 firmware. LSI has it down as a MegaRAID 493 elite 1600, so I
 flashed it with their latest firmware.  Doesn't seem to
 have helped either though.
 
 Does it have a battery backup module?  Often you can't
 really turn on
 write-back without one.  That would certainly slow things
 down.  But
 you should certainly expect  20 M/s on a modern RAID
 controller
 writing out to a 4 disk RAID10
 

Yeah the battery's on it, that and the 128Mb is really the only reason I 
thought I'd give it a whirl.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Perc 3 DC

2008-11-22 Thread Glyn Astill
--- On Sat, 22/11/08, Scott Marlowe [EMAIL PROTECTED] wrote:

 I had an old workstation with a 4 port SATA card (no raid) running
 software raid and it handily stomps this 8 disk machine into the ground.

Yeah, I think this machine will be going that route.

 We had a bunch of 18xx series servers last company I was at
 (we went
 from unix / linux to Microsoft, so ordered some 400
 machines to
 replace a dozen or so unix machines) 

I'm not surprised. We've just had some management inserted to make decisions 
like that for us. Honestly, if I get asked one more time why we're not utilizing 
iSCSI or <insert buzzword here> more... But that's another matter.

 
 I think as much as anything the busses on the dells are the
 problem,
 resulting in pretty poor throughput, especially true of the
 old
 serverworks chipset machines.  Those things are pretty much
 boat
 anchors.

Funny that, possibly explains some of the useless supermicro hardware I had a 
while back.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Filesystem benchmarking for pg 8.3.3 server

2008-08-11 Thread Glyn Astill
 
  It feels like there is something fishy going on.
 Maybe the RAID 10  
  implementation on the PERC/6e is crap?
 

It's possible.  We had a bunch of perc/5i SAS raid cards in our servers that 
performed quite well in Raid 5 but were shite in Raid 10.  I switched them out 
for Adaptec 5808s and saw a massive improvement in Raid 10.



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Mailing list hacked by spammer?

2008-07-18 Thread Glyn Astill
Most likely just a forged header or something; hardly hacked though, is it?  I 
think you need to do some training: 
http://www2.b3ta.com/bigquiz/hacker-or-spacker/



- Original Message 
 From: Craig James [EMAIL PROTECTED]
 To: pgsql-performance@postgresql.org
 Sent: Friday, 18 July, 2008 4:02:37 PM
 Subject: [PERFORM] Mailing list hacked by spammer?
 
 I've never gotten a single spam from the Postgres mailing list ... until 
 today.  
 A Chinese company selling consumer products is using this list.  I have my 
 filters set to automatically trust this list because it has been so reliable 
 until now.  It would be really, really unfortunate if this list fell to the 
 spammers.
 
 Craig
 
 -- 
 Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-performance




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Mailing list hacked by spammer?

2008-07-18 Thread Glyn Astill
 Glyn Astill wrote:

  Most likely just a forged header or something, hardly hacked
  though is it.
 
 Yes, hack is the correct term.  The bad guys have hacked into the major email 
 systems, including gmail, which was the origin of this spam:
 
   http://www.theregister.co.uk/2008/02/25/gmail_captcha_crack/
 
   I think you need to do some training:
  http://www2.b3ta.com/bigquiz/hacker-or-spacker/
 
 Sending a link to a web site that plays loud rap music is not a friendly way 
 to 
 make your point.
 
 Craig
 

Whatever. I see you clicked on the link then, even though it came from a 
'hacked' mailing list :-)

Weren't you the chap that couldn't figure out how to use the slony tools and 
threw a wobbler at the developers ...?




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Adaptec 5805 SAS Raid

2008-03-14 Thread Glyn Astill
Any of you chaps used this controller?





-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Recomendations on raid controllers raid 1+0

2008-03-13 Thread Glyn Astill
Hi chaps,

I'm looking at switching out the perc5i (lsi megaraid) cards from our
Dell 2950s for something else as they're crap at raid 10.

Thing is I'm not entirely sure where to start; we're using 6 SAS
drives and also need a BBU cache. The perc5i has 256Mb, which I'm sure
would be fine for us.

We're using debian etch 64 bit.

What cards have you lot had success with?

Any tips would be appreciated.




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Recomendations on raid controllers raid 1+0

2008-03-13 Thread Glyn Astill
Bonnie++ benchmarks below.

I believe the Perc 5/i Raid 10 mode is actually a span of mirrors, rather 
than the stripe of mirrors we should expect from 1+0, and that this is 
the reason for the shitty performance.


RAID 5
==
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
way5a   32096M 55354  97 201375  59 109586  23 59934  97 427541  33 767.9   1
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16 + +++ + +++ + +++ + +++ + +++ + +++
way5a,32096M,55354,97,201375,59,109586,23,59934,97,427541,33,767.9,1,16,+,+++,+,+++,+,+++,+,+++,+,+++,+,+++

RAID 10
===
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
Way5a   32096M 53479  99 131640  33 66718  10 58225  95 339287  25 699.1   1
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16 + +++ + +++ + +++ + +++ + +++ + +++
Way5a,32096M,53479,99,131640,33,66718,10,58225,95,339287,25,699.1,1,16,+,+++,+,+++,+,+++,+,+++,+,+++,+,+++


- Original Message 
 From: Guillaume Smet [EMAIL PROTECTED]
 To: Glyn Astill [EMAIL PROTECTED]
 Cc: pgsql-performance@postgresql.org
 Sent: Thursday, 13 March, 2008 3:58:41 PM
 Subject: Re: [PERFORM] Recomendations on raid controllers raid 1+0
 
 Glyn,
 
 On Thu, Mar 13, 2008 at 1:33 PM, Glyn Astill wrote:
  I'm looking at switching out the perc5i (lsi megaraid) cards from our
  Dell 2950s for something else as they're crap at raid 10.
 
 Do you have numbers? Perc 5/i cards perform quite well for us (we have
 a 8 disks RAID 10 in a 2900 server with the traditional Perc 5/i).
 
 --
 Guillaume




-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance