Hello,
I'm experiencing a strange issue. I have a table with around 11 million
records (11471762 to be exact), storing login attempts to a web site.
Thanks to the index I have created on username, looking into that table
by username is very fast:
db=# EXPLAIN ANALYZE
SELECT
*
FROM
Kouber Saparev wrote:
db=# EXPLAIN ANALYZE
SELECT
*
FROM
login_attempt
WHERE
username='kouber'
ORDER BY
login_attempt_sid DESC;
QUERY PLAN
On Mon, Feb 23, 2009 at 7:26 AM, Kouber Saparev kou...@saparev.com wrote:
Now, recently I have altered some of the default parameters in order to get
as much as possible out of the hardware - 12 GB of RAM, 8 processors. So, I
guess I have done something wrong, thus the planner is taking that
Robert Haas robertmh...@gmail.com writes:
If you left seq_page_cost (which isn't mentioned here) at the default
value but reduced random_page_cost to 0.1, then you have
random_page_cost < seq_page_cost. That's probably Bad.
... well, it's certainly going to push the planner to believe
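For reference, the cost parameters under discussion can be inspected and reset per session; the values below are the stock defaults for this era of PostgreSQL, not necessarily what the original poster had set:

```sql
-- Check the current settings (run in psql).
SHOW seq_page_cost;      -- default 1.0
SHOW random_page_cost;   -- default 4.0

-- Restore the defaults for this session, so random_page_cost
-- stays above seq_page_cost as the planner expects.
SET seq_page_cost = 1.0;
SET random_page_cost = 4.0;
```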
Richard Huxton wrote:
Since it's expecting 7914 rows for kouber it thinks it will find the
20 rows you want fairly quickly by just looking backward through the
login_attempt_pkey index.
Try increasing the stats on the username column.
ALTER TABLE login_attempt ALTER COLUMN username SET
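The statement above is cut off after SET; the complete form is the per-column statistics-target variant of ALTER TABLE. A sketch (1000 is an illustrative target, not the value from the thread):

```sql
-- Raise the per-column statistics target (1000 is illustrative),
-- then re-analyze so the planner picks up the new histogram.
ALTER TABLE login_attempt ALTER COLUMN username SET STATISTICS 1000;
ANALYZE login_attempt;
```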
Kouber Saparev kou...@saparev.com writes:
Now the planner believes there're 910 rows, which is a bit closer to the
real data:
swing=# select avg(length) from (select username, count(*) as length
from login_attempt group by username) as freq;
avg
--
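The estimate-vs-reality gap can also be checked directly against the column statistics; a sketch using the standard pg_stats view (table and column names as in the thread):

```sql
-- How the planner sees the username column after ANALYZE.
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'login_attempt'
  AND attname = 'username';
```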
On Sat, 21 Feb 2009 21:04:49 -0500,
Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 8:40 PM, Denis Lussier
denis.luss...@enterprisedb.com wrote:
Hi all,
As the author of BenchmarkSQL and the founder of EnterpriseDB I
can assure you that BenchmarkSQL was
Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
If you left seq_page_cost (which isn't mentioned here) at the default
value but reduced random_page_cost to 0.1, then you have
random_page_cost < seq_page_cost. That's probably Bad.
... well, it's certainly going to push the planner to
[note: sending a message that's been sitting in 'drafts' since last week]
Summary: C client and large-object API python both send bits in
reasonable time, but I suspect there's still room for improvement in
libpq over TCP: I'm suspicious of the 6x difference. Detailed analysis
will probably find
On Thu, Feb 19, 2009 at 02:09:04PM +0100, PFC wrote:
python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
either version.
I've seen psycopg2 saturate a 100 Mbps ethernet connection (direct
connection with crossover cable) between postgres server and client during
On Fri, 20 Feb 2009, David Rees wrote:
On Fri, Feb 20, 2009 at 1:34 PM, Battle Mage battlem...@gmail.com wrote:
The amount of tps almost doubled, which is good, but I'm worried about the
load. For my application, a load increase is bad and I'd like to keep it
just like in 8.2.6 (a load
Ross J. Reedstrom reeds...@rice.edu writes:
Summary: C client and large-object API python both send bits in
reasonable time, but I suspect there's still room for improvement in
libpq over TCP: I'm suspicious of the 6x difference. Detailed analysis
will probably find it's all down to memory
Hello,
I am doing a performance comparison between running
Jena (http://jena.sourceforge.net/) with MySQL and Postgres. I used the
8.3-community version of Postgres and
MySQL 5.0.67. I have run several queries to both MySQL and Postgres and all
of them took similar amount of time to execute except
Farhan Husain russ...@gmail.com writes:
I can provide any other information needed and also the data if anyone
wants.
What did the query plans look like in both databases?
In Postgres you can get the query plan with
EXPLAIN ANALYZE select ...
You can leave out the ANALYZE if you can't wait
On Mon, Feb 23, 2009 at 4:16 PM, Farhan Husain russ...@gmail.com wrote:
Hello,
I am doing a performance comparison between running Jena with MySQL and
Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I
have run several queries to both MySQL and Postgres and all of them
On Mon, Feb 23, 2009 at 5:27 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Feb 23, 2009 at 4:16 PM, Farhan Husain russ...@gmail.com wrote:
Hello,
I am doing a performance comparison between running Jena with MySQL and
Postgres. I used the 8.3-community version of Postgres and
On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe scott.marl...@gmail.com wrote:
If it's not C then string comparisons are probably going to need special
indexes to work the way you expect (varchar_pattern_ops). Look
here for more information:
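The special index being referred to is an operator-class index, which lets a btree in a non-C locale serve LIKE 'abc%' prefix searches; a minimal sketch, assuming the subj column discussed later in the thread is a varchar:

```sql
-- In a non-C locale, a plain btree index cannot serve
-- LIKE 'abc%' prefix searches; a *_pattern_ops index can.
-- Index name is hypothetical.
CREATE INDEX jena_subj_pattern_idx
    ON jena_g1t1_stmt (subj varchar_pattern_ops);
```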
On Mon, Feb 23, 2009 at 2:02 PM, da...@lang.hm wrote:
On Fri, 20 Feb 2009, David Rees wrote:
On Fri, Feb 20, 2009 at 1:34 PM, Battle Mage battlem...@gmail.com wrote:
The amount of tps almost doubled, which is good, but I'm worried about
the
load. For my application, a load increase is bad
On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark st...@enterprisedb.com wrote:
Farhan Husain russ...@gmail.com writes:
I can provide any other information needed and also the data if anyone
wants.
What did the query plans look like in both databases?
In Postgres you can get the query plan
On Mon, Feb 23, 2009 at 4:33 PM, Guillaume Smet
guillaume.s...@gmail.com wrote:
On Tue, Feb 24, 2009 at 12:27 AM, Scott Marlowe scott.marl...@gmail.com
wrote:
If it's not C then string comparisons are probably going to need special
indexes to work the way you expect (varchar_pattern_ops).
On Mon, Feb 23, 2009 at 4:35 PM, Farhan Husain russ...@gmail.com wrote:
On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark st...@enterprisedb.com
wrote:
Farhan Husain russ...@gmail.com writes:
I can provide any other information needed and also the data if anyone
wants.
What did the query
On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark st...@enterprisedb.com wrote:
Farhan Husain russ...@gmail.com writes:
I can provide any other information needed and also the data if anyone
wants.
What did the query plans look like in both databases?
In Postgres you can get the query plan
On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain russ...@gmail.com wrote:
This sort here:
- Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual
time=5410606.604..5410606.628 rows=31 loops=1)
Sort Key: a1.subj
Sort Method: quicksort Memory: 489474kB
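An in-memory quicksort holding roughly 489 MB implies work_mem was set at least that high on this session; it can be checked and adjusted in one line (the value shown is illustrative, not from the thread):

```sql
SHOW work_mem;           -- the session's current sort-memory budget
SET work_mem = '512MB';  -- illustrative: large enough for this sort
```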
Scott Marlowe scott.marl...@gmail.com writes:
On Mon, Feb 23, 2009 at 6:24 PM, Farhan Husain russ...@gmail.com wrote:
This sort here:
- Sort (cost=565372.46..568084.16 rows=1084680 width=74) (actual
time=5410606.604..5410606.628 rows=31 loops=1)
Sort Key: a1.subj
Sort Method:
Farhan Husain russ...@gmail.com writes:
Here is the output:
I see a couple of things going on here:
* The planner is choosing to use sort-and-mergejoin for the second join.
This requires sorting all of jena_g1t1_stmt. If it had accurately
estimated the output size of the first join (ie 30 rows
On Mon, Feb 23, 2009 at 5:27 PM, Gregory Stark st...@enterprisedb.com wrote:
Farhan Husain russ...@gmail.com writes:
I can provide any other information needed and also the data if anyone
wants.
What did the query plans look like in both databases?
In Postgres you can get the query plan
I am doing a performance comparison between running Jena with MySQL and
Postgres. I used the 8.3-community version of Postgres and MySQL 5.0.67. I
have run several queries to both MySQL and Postgres and all of them took
similar amount of time to execute except one. For the following query to a