Vivek Khera writes:
> what raid card have you got?
2 3ware cards. I believe both are 9550SX.
> i'm playing with an external enclosure
> which has an areca sata raid in it and connects to the host via fibre
> channel.
What is the OS? FreeBSD?
One of the reasons I stick with 3ware is that it is
Christo Du Preez <[EMAIL PROTECTED]> writes:
> Fast:
> "public";"layertype";"parentid";0.98797;4;2;"{4,1}";"{0.00902256,0.00300752}";"";-0.142857
> Slow:
> "public";"layertype";"parentid";0.00745157;4;7;"{300}";"{0.976155}";"{1,1,4,5,8,12}";0.92262
Well, those statistics are almost completely different.
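For anyone reproducing this comparison: rows in that format line up with the columns of the pg_stats view, so they were presumably pulled with a query along these lines (the table and column names are taken from the rows themselves):

-- Sketch of how stats rows like the ones above can be fetched from pg_stats.
SELECT schemaname, tablename, attname, null_frac, avg_width, n_distinct,
       most_common_vals, most_common_freqs, histogram_bounds, correlation
FROM pg_stats
WHERE tablename = 'layertype'
  AND attname = 'parentid';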
Thanks Tom and Alvaro.
To follow up on this, I rewrote and tweaked a number of queries (including
the one provided) to change "LEFT OUTER JOIN ... WHERE col IS NULL" clauses
to "WHERE col NOT IN (...)" clauses.
This has brought performance to an acceptable level on 8.2.
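For reference, the rewrite was of roughly this shape; the table and column names below are invented for illustration and are not the actual queries:

-- Anti-join form that was slow here (hypothetical tables parent and child):
SELECT parent.*
FROM parent
LEFT OUTER JOIN child ON child.parent_id = parent.id
WHERE child.parent_id IS NULL;

-- Rewritten form that performed acceptably on 8.2:
SELECT parent.*
FROM parent
WHERE parent.id NOT IN (SELECT parent_id FROM child WHERE parent_id IS NOT NULL);

Note the IS NOT NULL guard: NOT IN returns no rows at all if the subquery can produce a NULL, so the two forms are only equivalent when NULLs are excluded.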
Thanks for your time,
S
On Jun 11, 2007, at 9:14 PM, Francisco Reyes wrote:
> RAID card 1 with 8 drives. 7200 RPM SATA RAID10
> RAID card 2 with 4 drives. 10K RPM SATA RAID10
what raid card have you got? i'm playing with an external enclosure
which has an areca sata raid in it and connects to the host via fibre
channel.
Vladimir Stankovic wrote:
What I am hoping to see is NOT the same value for all the executions of
the same type of transaction (after some transient period). Instead, I'd
like to see that if I take an appropriately sized set of transactions I
will see at least steady growth in the transaction average
Fast:
"public";"layertype";"id";0;4;-1;"";"";"{1,442,508,575,641,708,774,840,907,973,1040}";0.95
"public";"layertype";"label";0;14;-0.971429;"{arch,bank,bench,canyon,gap,hill,hills,levee,mountain,mountains}";"{0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300752,0.00300
Christo Du Preez <[EMAIL PROTECTED]> writes:
> Yes, I have just about tried every combination of vacuum on the
> database. Just to make 100% sure.
Well, there's something mighty wacko about that rowcount estimate;
even if you didn't have stats, the estimate for a simple equality
constraint oughtn't
Yes, I have just about tried every combination of vacuum on the
database. Just to make 100% sure.
Tom Lane wrote:
> Christo Du Preez <[EMAIL PROTECTED]> writes:
>
>> On my laptop the explain analyze looks like this:
>>
>> "Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27
>> rows=1 width=109)"
Hi Bill,
...
>
> However, you can get some measure of tracking by running VACUUM VERBOSE
> on a regular basis to see how well autovacuum is keeping up. There's
> no problem with running manual vacuum and autovacuum together, and you'll
> be able to gather _some_ information about how well autovac
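In practice that spot check is just a manual, verbose vacuum of the tables you care about, for example (layertype is only used here as an example table name):

-- Manual verbose vacuum; the output lines about removed and nonremovable
-- dead row versions show whether autovacuum has been keeping dead tuples in check.
VACUUM VERBOSE layertype;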
In response to "Sabin Coanda" <[EMAIL PROTECTED]>:
> Hi there,
>
> Using the VACUUM command explicitly gives me the opportunity to fine-tune my
> VACUUM scheduling parameters, after I analyze the log generated by VACUUM
> VERBOSE.
>
> On the other hand I'd like to use the auto-vacuum mechanism because of its
> facilities.
Christo Du Preez <[EMAIL PROTECTED]> writes:
> On my laptop the explain analyze looks like this:
> "Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27
> rows=1 width=109)"
> " Index Cond: (parentid = 300)"
OK ...
> and on the problem server:
> "Seq Scan on layertype (cost=
Sabin Coanda wrote:
> Hi there,
>
> Using the VACUUM command explicitly gives me the opportunity to fine-tune my
> VACUUM scheduling parameters, after I analyze the log generated by VACUUM
> VERBOSE.
>
> On the other hand I'd like to use the auto-vacuum mechanism because of its
> facilities. Unfortunately,
Craig James wrote:
> Tyrrill, Ed wrote:
> > QUERY PLAN
> > -----------------------------------------------------------------------
> > Merge Left Join  (cost=38725295.93..42505394.70 rows=
Hi there,
Using the VACUUM command explicitly gives me the opportunity to fine-tune my
VACUUM scheduling parameters, after I analyze the log generated by VACUUM
VERBOSE.
On the other hand I'd like to use the auto-vacuum mechanism because of its
facilities. Unfortunately, after I made some initial e
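For reference, the global autovacuum knobs live in postgresql.conf (8.2-era parameter names; the values below are purely illustrative, not recommendations), and per-table overrides in 8.2 go into the pg_autovacuum system catalog:

# Illustrative autovacuum settings for postgresql.conf (8.2):
autovacuum = on
autovacuum_naptime = 60                  # seconds between checks of each database
autovacuum_vacuum_threshold = 500        # minimum updated/deleted tuples before a vacuum
autovacuum_vacuum_scale_factor = 0.2     # ... plus this fraction of the table's size
autovacuum_analyze_threshold = 250
autovacuum_analyze_scale_factor = 0.1
autovacuum_vacuum_cost_delay = 20        # throttle vacuum I/O (milliseconds)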
Heikki,
Thanks for the response.
Heikki Linnakangas wrote:
Vladimir Stankovic wrote:
I'm running write-intensive, TPC-C-like tests. The workload consists
of 150 to 200 thousand transactions. The performance varies
dramatically, between 5 and more than 9 hours (I don't have the exact
figure for
On Tue, Jun 12, 2007 at 04:11:33PM +0200, Christo Du Preez wrote:
> The actual table where I noticed the problem has a million rows and it still
> doesn't use indexing.
Then please post an EXPLAIN ANALYZE of the query that is slow, along with the
table definition and indexes.
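Concretely, something like this in psql collects everything asked for above (layertype/parentid are taken from earlier in the thread; substitute the real table and query):

\d layertype          -- prints the table definition together with its indexes

EXPLAIN ANALYZE       -- runs the query and reports estimated vs. actual rows and timings
SELECT * FROM layertype WHERE parentid = 300;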
/* Steinar */
--
Homepage:
In response to Arnau <[EMAIL PROTECTED]>:
> Hi all,
>
>I have a server with 4GB of memory and I'm tweaking the PostgreSQL
> configuration. This server will be dedicated to running PostgreSQL, so I'd
> like to dedicate as much RAM as possible to it.
>
>I have dedicated 1GB to shared_buffers
On Tue, Jun 12, 2007 at 03:32:40PM +0200, Christo Du Preez wrote:
> As an example I took a table with about 650 rows, having a parentid
> field with an index on parentid.
Try a bigger table. Using an index for only 650 rows is almost always
suboptimal, so it's no wonder the planner doesn't use the index.
> From: Christo Du Preez
> Sent: Tuesday, June 12, 2007 2:38 AM
>
> Where do I set the planner settings or are you referring to
> settings in postgresql.conf that may affect the planner?
>
Yes, I'm referring to settings in postgresql.conf. I'm wondering if
enable_indexscan or something got turned off.
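A quick way to check that from psql on the problem server (these are all stock settings, nothing exotic assumed):

-- See whether any planner method has been disabled:
SHOW enable_indexscan;
SHOW enable_bitmapscan;
SHOW enable_seqscan;

-- For testing only, a toggle can be flipped per session, e.g.:
SET enable_seqscan = off;   -- strongly discourages sequential scans for this session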
Hi all,
I have a server with 4GB of memory and I'm tweaking the PostgreSQL
configuration. This server will be dedicated to running PostgreSQL, so I'd
like to dedicate as much RAM as possible to it.
I have dedicated 1GB to shared_buffers (shared_buffers=131072) but
I'm not sure if this will be
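As a rough illustration only, a dedicated 4GB box is often started off with something in this neighbourhood (8.2-style unit suffixes; example values, not tuned recommendations):

# postgresql.conf sketch for a dedicated 4GB machine (illustrative values):
shared_buffers = 131072            # 1GB at the default 8kB block size, as above
work_mem = 16MB                    # per sort/hash operation, so keep modest with many sessions
maintenance_work_mem = 256MB       # speeds up VACUUM and index builds
effective_cache_size = 3GB         # planner hint: roughly the RAM left to the OS cache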
Christo Du Preez wrote:
> The actual table where I noticed the problem has a million rows and it still
> doesn't use indexing.
So ANALYZE it.
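That is, roughly (the million-row table isn't named in the thread, so the name below is a placeholder):

-- Refresh the planner's statistics for the table in question:
ANALYZE my_big_table;

-- or reclaim dead space and update statistics in one pass:
VACUUM ANALYZE my_big_table;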
--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"Amanece. (Ignacio Reyes)
El Cerr
The actual table where I noticed the problem has a million rows and it still
doesn't use indexing.
Reid Thompson wrote:
> try it with a table with 650K rows...
>
> On Tue, 2007-06-12 at 15:32 +0200, Christo Du Preez wrote:
>
>> Good day,
>>
>> I have noticed that my server never uses indexing. No matter what I do.
"Christo Du Preez" <[EMAIL PROTECTED]> writes:
> On my laptop the explain analyze looks like this:
>
> "Index Scan using fki_layertype_parentid on layertype (cost=0.00..8.27
> rows=1 width=109)"
> " Index Cond: (parentid = 300)"
That's not "explain analyze", that's just plain "explain".
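For anyone unsure of the distinction: EXPLAIN prints only the planner's estimates, while EXPLAIN ANALYZE also executes the query and appends actual row counts and timings to every plan node. Using the query from earlier in the thread:

-- Estimates only:
EXPLAIN SELECT * FROM layertype WHERE parentid = 300;

-- Executes the query and adds "(actual time=... rows=... loops=...)" to each node:
EXPLAIN ANALYZE SELECT * FROM layertype WHERE parentid = 300;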
--
On Jun 12, 2007, at 8:32 , Christo Du Preez wrote:
I have noticed that my server never uses indexing. No matter what I
do.
As an example I took a table with about 650 rows, having a parentid
field with an index on parentid.
EXPLAIN ANALYZE
SELECT *
FROM layertype
where parentid = 300;
T
try it with a table with 650K rows...
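A throwaway table of that size is easy to build for the experiment; the sketch below invents the column contents and only assumes generate_series (present since 8.0):

-- Build a 650K-row test table with a selective parentid column:
CREATE TABLE layertype_big AS
  SELECT g AS id,
         g % 1000 AS parentid,          -- 1000 distinct parent ids, ~650 rows each
         repeat('x', 100) AS padding    -- give the rows some width
  FROM generate_series(1, 650000) AS g;

CREATE INDEX layertype_big_parentid_idx ON layertype_big (parentid);
ANALYZE layertype_big;

-- With ~0.1% of the rows matching, the planner should now prefer an index
-- (or bitmap index) scan over a sequential scan:
EXPLAIN ANALYZE SELECT * FROM layertype_big WHERE parentid = 300;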
On Tue, 2007-06-12 at 15:32 +0200, Christo Du Preez wrote:
> Good day,
>
> I have noticed that my server never uses indexing. No matter what I do.
>
> As an example I took a table with about 650 rows, having a parentid
> field with an index on parentid.
>
>
Good day,
I have noticed that my server never uses indexing. No matter what I do.
As an example I took a table with about 650 rows, having a parentid
field with an index on parentid.
EXPLAIN ANALYZE
SELECT *
FROM layertype
where parentid = 300;
On my laptop the explain analyze looks like this
Where do I set the planner settings or are you referring to settings in
postgresql.conf that may affect the planner?
The one badly performing laptop is the same as mine (the fast one) and
the server is much more powerful.
Laptops: Intel Centrino Duo T2600 @ 2.16GHz, 1.98 GB RAM
Server: 2 x Intel Pe