The execution time has not improved. I am going to increase
shared_buffers now, keeping work_mem the same.
Have you performed a vacuum analyze?
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.
Shakespeare
--
On Thu, 26 Feb 2009 09:00:07 +0100, Claus Guttesen kome...@gmail.com wrote:
The execution time has not improved. I am going to increase
shared_buffers now, keeping work_mem the same.
Have you performed a vacuum analyze?
and reindex
--
Regards,
Gábriel Ákos
On Wed, Feb 25, 2009 at 4:10 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
Farhan Husain russ...@gmail.com wrote:
The machine postgres is running on has 4 GB of RAM.
In addition to the other suggestions, you should be sure that
effective_cache_size is set to a reasonable value,
On Wed, Feb 25, 2009 at 6:07 PM, Scott Carey sc...@richrelevance.com wrote:
I will second Kevin’s suggestion. Unless you think you will have more
than a few dozen concurrent queries, start with work_mem around 32MB.
For the query here, a very large work_mem might help it hash join depending
Farhan Husain russ...@gmail.com wrote:
Thanks a lot Scott! I think that was the problem. I just changed the
default statistics target to 50 and ran explain. The plan changed
and I ran explain analyze. Now it takes a fraction of a second!
Yeah, the default of 10 has been too low; in 8.4 the default is being raised.
On Thu, Feb 26, 2009 at 12:10 PM, Steve Clark scl...@netwolves.com wrote:
Can this be set in the postgresql.conf file?
default_statistics_target = 50
Yep. It will take effect after a reload and after the current
connection has been reset.
If you want, you can also set a default for a particular database.
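The per-database default mentioned above can be set with ALTER DATABASE; a minimal sketch (the database name "mydb" is a placeholder, not a name from this thread):

```sql
-- Per-database default for the statistics target. New connections to
-- this database will pick it up; existing sessions are unaffected.
ALTER DATABASE mydb SET default_statistics_target = 50;
```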
On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas robertmh...@gmail.com wrote:
You still haven't answered the work_mem question, and you probably
want to copy the list, rather than just sending this to me.
...Robert
On Wed, Feb 25, 2009 at 1:34 PM, Farhan Husain russ...@gmail.com wrote:
Just start up psql and type:
show work_mem;
(You could look in the config file too I suppose.)
...Robert
Please note that this (1792MB) is the highest that I could set for work_mem.
Yeah, that's almost certainly part of your problem.
You need to make that number MUCH smaller. You probably want a value
like 1MB or 5MB, or maybe 20MB if you really have a lot of memory.
That's insanely high.
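The reason a small global value is safe is that work_mem applies per sort or hash operation, per backend, so a big query can still get a large value for one session. A sketch of the usual pattern (the SELECT is a stand-in for the slow query from this thread):

```sql
-- Keep the global work_mem small; override it only where needed.
SET work_mem = '32MB';   -- session-local, does not touch the server default
EXPLAIN ANALYZE SELECT ...;   -- re-run the slow query here
RESET work_mem;          -- back to the configured default
```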
On Wed, Feb 25, 2009 at 12:05 PM, Farhan Husain russ...@gmail.com wrote:
On Wed, Feb 25, 2009 at 12:58 PM, Robert Haas robertmh...@gmail.com wrote:
Just start up psql and type:
show work_mem;
I did it, it does not show anything.
Did you remember the ; symbol?
Here is what I have got
On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain russ...@gmail.com wrote:
Initially, it was the default value (32MB). Later I played with that value
thinking that it might improve the performance. But all the values resulted
in same amount of time.
Well, if you set it back to what we consider to
On Wed, Feb 25, 2009 at 3:35 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Wed, Feb 25, 2009 at 2:32 PM, Farhan Husain russ...@gmail.com wrote:
On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas robertmh...@gmail.com
wrote:
On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain russ...@gmail.com
shared_buffers = 32MB                  # min 128kB or max_connections*16kB
That's REALLY small for pgsql. Assuming your machine has at least 1G
of ram, I'd set it to 128M to 256M as a minimum.
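Pulling together the suggestions made across this thread for the 4 GB machine, a postgresql.conf starting point might look like the fragment below. These are illustrative values assembled from the advice given here, not tuned results:

```
# postgresql.conf -- illustrative starting point for a 4 GB machine
shared_buffers = 256MB          # up from the tiny 32MB shown above
work_mem = 32MB                 # per sort/hash operation; keep it modest
effective_cache_size = 3GB      # planner hint, roughly OS cache size
default_statistics_target = 50  # better row estimates for the planner
```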
As I wrote in a previous email, I had the value set to 1792MB (the highest I
could set) and
It was only after I got this high execution time that I started to look into
the configuration file and change those values. I tried several combinations
in which all those values were higher than the default values. I got no
improvement in runtime. The machine postgres is running on has 4 GB of
On Wed, 25 Feb 2009 15:43:49 -0600, Farhan Husain russ...@gmail.com wrote:
OK, you have two options:
1. Learn to read carefully, and differentiate between work_mem and
shared_buffers options. Lower work_mem and raise shared_buffers as
others wrote.
2. Leave Postgresql alone and go for Oracle or
I am trying to find the reason of the problem so going to Oracle or
something else is not the solution. I tried with several combinations of
those parameters before posting the problem here. I have read
http://www.postgresql.org/docs/current/interactive/runtime-config-resource.html
before and I
Farhan Husain russ...@gmail.com wrote:
The machine postgres is running on has 4 GB of RAM.
In addition to the other suggestions, you should be sure that
effective_cache_size is set to a reasonable value, which would
probably be somewhere in the neighborhood of '3GB'. This doesn't
affect
I will second Kevin's suggestion. Unless you think you will have more than a
few dozen concurrent queries, start with work_mem around 32MB.
For the query here, a very large work_mem might help it hash join depending on
the data... But that's not the real problem here.
The real problem is that
Here is the latest output:
ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,
jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where
A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND
The result set should have 31 rows, that is correct.
On Mon, Feb 23, 2009 at 7:53 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain russ...@gmail.com wrote:
This sort here:
- Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual
On Tue, Feb 24, 2009 at 1:28 AM, Claus Guttesen kome...@gmail.com wrote:
I am doing a performance comparison between running Jena with MySQL and
Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67.
I
have run several queries to both MySQL and Postgres and all of them
Query:
select A0.Subj, A2.Obj From jena_g1t1_stmt A0, jena_g1t1_stmt A1,
jena_g1t1_stmt A2 Where
A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND
A0.Obj='Uv::http://www.utdallas.edu/~farhan.husain/IngentaConnect/issue1_1'
AND A0.GraphID=1 AND A0.Subj=A1.Subj
Can you please elaborate a bit?
I thought that A0.Prop would ignore the composite index created on the
columns subj and prop but this does not seem to be the case.
Yeah, I think you're barking up the wrong tree here. I think Tom had
the correct diagnosis - what do you get from show work_mem?
Hello,
I am doing a performance comparison between running Jena
(http://jena.sourceforge.net/) with MySQL and Postgres. I used the
8.3-community version of Postgres and MySQL 5.0.67. I have run several
queries against both MySQL and Postgres and all of them took a similar
amount of time to execute except
Farhan Husain russ...@gmail.com writes:
I can provide any other information needed and also the data if anyone
wants.
What did the query plans look like in both databases?
In Postgres you can get the query plan with
EXPLAIN ANALYZE select ...
You can leave out the ANALYZE if you can't wait for the query to finish.
On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe scott.marl...@gmail.com wrote:
If it's not C then string compares are probably going to need special
indexes to work the way you expect them. (varchar pattern ops). Look
here for more information:
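The special index Scott refers to is a b-tree built with a pattern operator class, which lets LIKE 'prefix%' style comparisons use the index in a non-C locale. A minimal sketch, using the jena_g1t1_stmt table and Prop column from the query in this thread (the index name is made up for illustration):

```sql
-- In a non-C locale, a plain b-tree index on a varchar column can't be
-- used for pattern matching; the varchar_pattern_ops opclass fixes that.
CREATE INDEX jena_g1t1_stmt_prop_pattern
    ON jena_g1t1_stmt (Prop varchar_pattern_ops);
```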
On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain russ...@gmail.com wrote:
This sort here:
- Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual
time=5410606.604..5410606.628 rows=31 loops=1)
Sort Key: a1.subj
Sort Method: quicksort Memory: 489474kB
Farhan Husain russ...@gmail.com writes:
Here is the output:
I see a couple of things going on here:
* The planner is choosing to use sort-and-mergejoin for the second join.
This requires sorting all of jena_g1t1_stmt. If it had accurately
estimated the output size of the first join (ie 30 rows
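Tom's diagnosis is a row-count misestimate (the planner expected over a million rows where only about 30 came out), and the fix that eventually worked in this thread was raising the statistics target. Besides the global default_statistics_target setting, the target can be raised for just the misestimated column; a sketch, with column and table names taken from the query in this thread and 500 as an illustrative value:

```sql
-- Raise the statistics target only where the estimate is bad, then
-- re-ANALYZE so new plans actually use the richer statistics.
ALTER TABLE jena_g1t1_stmt ALTER COLUMN Obj SET STATISTICS 500;
ANALYZE jena_g1t1_stmt;
```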