On Tue, 14 Sep 2004, Stephen Crowley wrote:
Problem solved.. I set the fetchSize to a reasonable value instead of
the default of unlimited in the PreparedStatement and now the query
is [...]. After some searching it seems this is a common problem; would it
make sense to change the default value to something other than 0 in
the JDBC driver?
Thanks for the explanation. So what sort of changes need to be made to
the client/server protocol to fix this problem?
On Thu, 23 Sep 2004 18:22:15 -0500 (EST), Kris Jurka [EMAIL PROTECTED] wrote:
On Tue, 14 Sep 2004, Stephen Crowley wrote:
Problem solved.. I set the fetchSize to a
On Thu, 23 Sep 2004, Stephen Crowley wrote:
Thanks for the explanation. So what sort of changes need to be made to
the client/server protocol to fix this problem?
The problem is that there is no way to indicate why you are using a
particular statement in the extended query protocol. For
Stephen,
->  Seq Scan on island_history (cost=0.00..417867.13 rows=292274
width=83) (actual time=346759.000..346759.000 rows=10 loops=1)
Take a look at your row comparisons. When was the last time you ran ANALYZE?
--
Josh Berkus
Aglio Database Solutions
San Francisco
On Thu, 16 Sep 2004 20:51:11 -0500, Stephen Crowley
[EMAIL PROTECTED] wrote:
explain analyze select * from history where date='2004-09-07' and
stock='ORCL' LIMIT 10;
->  Index Scan using island_history_date_stock_time on
island_history (cost=0.00..183099.72 rows=102166 width=83) (actual
Ok.. now I ran VACUUM FULL and things seem to be working as they should..
explain analyze select * from history where date='2004-09-07' and stock='MSFT';
Seq Scan on island_history (cost=0.00..275359.13 rows=292274
width=83) (actual time=50.000..411683.000 rows=265632 loops=1)
Filter:
When I set enable_seqscan to OFF and force everything to use the index,
every stock I query returns within 100ms, but turn seqscan back ON and
it's back up to taking several minutes for non-index-using plans.
Any ideas?
--Stephen
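
For anyone reproducing the test Stephen describes, here is a rough JDBC sketch of comparing the two plans from a single session; enable_seqscan is only a per-connection diagnostic switch here, not a production fix, and the connection details are placeholders. If the index plan consistently wins by this margin, the usual next steps are better statistics (see the suggestion below) or a lower random_page_cost, rather than disabling sequential scans globally.

    // Sketch: compare the planner's default choice against the forced index
    // plan from the same session. SET enable_seqscan = off only affects this
    // connection and is meant for diagnosis only.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PlanCompare {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/stocks", "user", "password");
            Statement stmt = conn.createStatement();

            String q = "EXPLAIN ANALYZE SELECT * FROM history "
                     + "WHERE date = '2004-09-07' AND stock = 'MSFT'";

            printPlan(stmt, q);                       // planner's own choice
            stmt.execute("SET enable_seqscan = off"); // discourage seq scans for this session
            printPlan(stmt, q);                       // index plan forced

            stmt.close();
            conn.close();
        }

        private static void printPlan(Statement stmt, String query) throws Exception {
            ResultSet rs = stmt.executeQuery(query);
            while (rs.next()) {
                System.out.println(rs.getString(1));  // one line of EXPLAIN output
            }
            rs.close();
        }
    }
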
Try increasing your statistics target and re-running analyze. Try say
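
The snippet cuts off before the suggested value, so as an illustration only: raising the per-column statistics target and re-running ANALYZE might look like the sketch below. The target of 100 is arbitrary (the default at the time was 10), and the connection details are placeholders.

    // Sketch: collect more detailed statistics for the columns used in the
    // WHERE clause, then re-ANALYZE so the planner's row estimates improve.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RaiseStatisticsTarget {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/stocks", "user", "password");
            Statement stmt = conn.createStatement();

            // Arbitrary illustrative target of 100 for the filtered columns.
            stmt.execute("ALTER TABLE island_history ALTER COLUMN stock SET STATISTICS 100");
            stmt.execute("ALTER TABLE island_history ALTER COLUMN date SET STATISTICS 100");
            stmt.execute("ANALYZE island_history");   // rebuild the statistics

            stmt.close();
            conn.close();
        }
    }
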
Problem solved.. I set the fetchSize to a reasonable value instead of
the default of unlimited in the PreparedStatement and now the query
is [...]. After some searching it seems this is a common problem; would it
make sense to change the default value to something other than 0 in
the JDBC driver?
If
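
For readers hitting the same problem, here is a minimal sketch of the fetchSize fix Stephen describes, assuming the PostgreSQL JDBC driver and the history table from earlier in the thread; the connection details and the batch size of 1000 are placeholders. With this driver, batched (cursor-based) fetching generally also requires autocommit to be off and a forward-only result set; with the default fetch size of 0 the whole result set is buffered client-side, which is what causes the swapping.

    // Sketch of the fetchSize fix: fetch rows in batches instead of
    // buffering all ~800,000 rows in the client.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FetchSizeExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/stocks", "user", "password");

            // Batched fetching only kicks in with autocommit off and a
            // fetch size greater than zero.
            conn.setAutoCommit(false);

            PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM history WHERE date = ? AND stock = ?");
            ps.setFetchSize(1000);   // fetch 1000 rows per round trip
            ps.setDate(1, java.sql.Date.valueOf("2004-09-07"));
            ps.setString(2, "MSFT");

            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                // process one row at a time
            }
            rs.close();
            ps.close();
            conn.commit();
            conn.close();
        }
    }
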
Hi, Stephen,
On Mon, 13 Sep 2004 19:51:22 -0500
Stephen Crowley [EMAIL PROTECTED] wrote:
Does postgres cache the entire result set before it begins returning
data to the client?
I have a table with ~8 million rows and I am executing a query which
should return about ~800,000 rows. The problem is that as soon as I
execute the query it absolutely kills my machine and begins swapping
for 5 or 6 minutes before it begins returning results. Is postgres
trying to load the whole
Stephen Crowley [EMAIL PROTECTED] writes:
Does postgres cache the entire result set before it begins returning
data to the client?
The backend doesn't, but libpq does, and I think JDBC does too.
I'd recommend using a cursor so you can FETCH a reasonable number of
rows at a time.
Also, why
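
A rough sketch of the explicit-cursor approach recommended above, issued through JDBC; the cursor name hist_cur and the batch size are arbitrary, and the connection details are placeholders. A plain (non-WITH HOLD) cursor only lives inside a transaction, so autocommit is turned off first.

    // Sketch: DECLARE a cursor and FETCH the result in manageable batches
    // instead of pulling all rows at once.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CursorFetchExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/stocks", "user", "password");
            conn.setAutoCommit(false);   // cursor must live inside a transaction

            Statement stmt = conn.createStatement();
            stmt.execute("DECLARE hist_cur CURSOR FOR "
                       + "SELECT * FROM history WHERE date = '2004-09-07'");

            while (true) {
                ResultSet rs = stmt.executeQuery("FETCH FORWARD 1000 FROM hist_cur");
                int n = 0;
                while (rs.next()) {
                    n++;
                    // process the row
                }
                rs.close();
                if (n == 0) {
                    break;               // cursor exhausted
                }
            }

            stmt.execute("CLOSE hist_cur");
            conn.commit();
            conn.close();
        }
    }
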
On Mon, 2004-09-13 at 20:51, Stephen Crowley wrote:
Does postgres cache the entire result set before it begins returning
data to the client?
Sometimes you need to be careful as to how the clients treat the data.
For example psql will resize column widths based on the length (width) of the
data
On Mon, 13 Sep 2004 21:11:07 -0400, Tom Lane [EMAIL PROTECTED] wrote:
Stephen Crowley [EMAIL PROTECTED] writes:
Does postgres cache the entire result set before it begins returning
data to the client?
The backend doesn't, but libpq does, and I think JDBC does too.
I'd recommend using a cursor so you can FETCH a reasonable number of
rows at a time.