da...@lang.hm writes:
> since there is not a pre-parsed interface for queries, it may make sense to
> set up a way to have the query pre-parsed, but not pre-planned, for cases like
> this.
What would be more interesting would be to have plans that take the outlier
values into account and have alter…
On Tue, 30 Dec 2008, Tom Lane wrote:
> Scott Carey writes:
>> I have also had a case where one query would take a couple hundred ms to
>> parse, but was fairly fast to plan and execute (1/3 the parse cost) -- yet
>> another case where a prepared statement that re-plans each execution would be
>> helpful…
Hi Tom,
On Tue, Dec 30, 2008 at 3:02 PM, Tom Lane wrote:
> The point of a prepared statement IMHO is to do the planning only once.
> There's necessarily a tradeoff between that and having a plan that's
> perfectly adapted to specific parameter values.
I agree, and normally it wouldn't be an issue…
Scott Carey writes:
> I have also had a case where one query would take a couple hundred ms to
> parse, but was fairly fast to plan and execute (1/3 the parse cost) -- yet
> another case where a prepared statement that re-plans each execution would be
> helpful. At least you can prevent SQL injection…
On Tue, Dec 30, 2008 at 1:09 PM, Scott Carey wrote:
> There is no way to force Postgres to re-plan a prepared statement. In many
> cases, this would be a hugely beneficial feature (perhaps part of the
> definition of the statement?).
>
> I have had similar issues, and had to code the application…
There is no way to force Postgres to re-plan a prepared statement. In many
cases, this would be a hugely beneficial feature (perhaps part of the
definition of the statement?).
I have had similar issues, and had to code the application to prevent SQL
injection (Postgres $ quotes and other stuff…
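As an illustration of the workaround Scott is describing (this sketch is not from the thread, and the function, table, and column names are invented): in plpgsql, a query run through EXECUTE is planned on every call, so the planner sees the actual parameter value, while quote_literal() keeps the dynamically built string safe from SQL injection.

CREATE OR REPLACE FUNCTION clicks_for_subid(p_subid text)
RETURNS SETOF click AS $$
DECLARE
    r click%ROWTYPE;
BEGIN
    -- EXECUTE builds and plans the statement on each call, so the plan is
    -- tailored to the actual value; quote_literal() prevents SQL injection.
    FOR r IN EXECUTE
        'SELECT * FROM click WHERE subid = ' || quote_literal(p_subid)
    LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;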
Hi Scott,
On Tue, Dec 30, 2008 at 12:09 PM, Scott Marlowe wrote:
> On Tue, Dec 30, 2008 at 12:42 PM, Merlin Moncure wrote:
>> On Tue, Dec 30, 2008 at 1:59 PM, bricklen wrote:
>>> Hi, I am re-posting my question here after trying to find a solution
>>> in the PHP pgsql list with no luck.
>>>
On Tue, Dec 30, 2008 at 12:42 PM, Merlin Moncure wrote:
> On Tue, Dec 30, 2008 at 1:59 PM, bricklen wrote:
>> Hi, I am re-posting my question here after trying to find a solution
>> in the PHP pgsql list with no luck.
>>
>> I am experiencing some performance issues that I think are stemming
>> from prepared statements…
Hi Merlin,
On Tue, Dec 30, 2008 at 11:42 AM, Merlin Moncure wrote:
> On Tue, Dec 30, 2008 at 1:59 PM, bricklen wrote:
>> Hi, I am re-posting my question here after trying to find a solution
>> in the PHP pgsql list with no luck.
>>
>> I am experiencing some performance issues that I think are stemming…
On Tue, Dec 30, 2008 at 1:59 PM, bricklen wrote:
> Hi, I am re-posting my question here after trying to find a solution
> in the PHP pgsql list with no luck.
>
> I am experiencing some performance issues that I think are stemming
> from prepared statements. I have a pretty simple query:
> -- bad p…
Hi, I am re-posting my question here after trying to find a solution
in the PHP pgsql list with no luck.
I am experiencing some performance issues that I think are stemming
from prepared statements. I have a pretty simple query:
SELECT cl.idOffer, cl.idaffiliate, cl.subid, cl.datetime
FROM click AS cl …
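The rest of the query is cut off in this preview. A common way to confirm that the prepared form gets a worse plan is to compare EXPLAIN output for the prepared statement against the same query with a literal value; a rough sketch follows, with a guessed WHERE clause since the original is truncated:

PREPARE click_q(integer) AS
    SELECT cl.idOffer, cl.idaffiliate, cl.subid, cl.datetime
    FROM click AS cl
    WHERE cl.idOffer = $1;     -- hypothetical filter

EXPLAIN EXECUTE click_q(1234); -- plan chosen without seeing the parameter value

EXPLAIN SELECT cl.idOffer, cl.idaffiliate, cl.subid, cl.datetime
FROM click AS cl
WHERE cl.idOffer = 1234;       -- plan chosen with the literal value visible

DEALLOCATE click_q;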
Guillaume Lelarge writes:
> Laszlo Nagy wrote:
>> We have several tables where the index size is much bigger than the
>> table size.
>> ...
>> Vacuuming a table does not rebuild the indexes, am I right?
> Neither VACUUM nor VACUUM FULL rebuilds the indexes. CLUSTER and REINDEX do.
In fact, VAC…
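For reference, the two commands Guillaume points to look like this (illustrative; product_price is the table from the original report and the index name in the CLUSTER line is a guess). Both take an exclusive lock on the table while they run:

REINDEX TABLE product_price;                      -- rebuilds every index on the table
CLUSTER product_price USING product_price_pkey;   -- rewrites the table in index order and rebuilds its indexes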
On Tue, Dec 30, 2008 at 5:41 AM, Anton Bogdanovitch wrote:
> I have to insert rows into a table where 95% of them hit a primary key
> unique_violation.
If you're inserting a lot of rows at once, I think you're probably
better off loading all of the data into a side table that does not
have a primary key, and then writing…
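A sketch of that staging-table approach (the column names follow the INSERT shown in the original post further down; the file path and the DISTINCT handling are assumptions):

-- Load everything into an unconstrained staging table first.
CREATE TEMP TABLE main_load (name text, created timestamp);
COPY main_load FROM '/tmp/new_rows.csv' WITH CSV;

-- Then insert only the keys that are not already present in main;
-- DISTINCT ON guards against duplicates within the load itself.
INSERT INTO main (name, created)
SELECT DISTINCT ON (l.name) l.name, l.created
FROM main_load l
WHERE NOT EXISTS (SELECT 1 FROM main m WHERE m.name = l.name)
ORDER BY l.name, l.created;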
On Tue, Dec 30, 2008 at 4:05 AM, Laszlo Nagy wrote:
> We have several tables where the index size is much bigger than the table
> size.
You'll usually get index bloat in roughly the same measure that you
get table bloat. If you always (auto)vacuum regularly, then the
amount of bloat in your indexes…
I have to insert rows into a table where 95% of them hit a primary key unique_violation.
I've tested 2 examples below:
1)
BEGIN
    INSERT INTO main (name, created)
        VALUES (i_name, CURRENT_TIMESTAMP AT TIME ZONE 'GMT');
EXCEPTION WHEN UNIQUE_VIOLATION THEN
    RETURN 'error: already exists';
END;
RETURN 'ok: s
Laszlo Nagy wrote:
> We have several tables where the index size is much bigger than the
> table size.
>
> Example:
>
> select count(*) from product_price -- 2234244
>
> Table size: 400 MB
> Index size: 600 MB
>
> After executing "reindex table product_price", index size was reduced to 269 MB.
>
We have several tables where the index size is much bigger than the
table size.
Example:
select count(*) from product_price -- 2234244
Table size: 400 MB
Index size: 600 MB
After executing "reindex table product_price", index size was reduced to 269 MB.
I believe this affects performance.
Vacuuming a table does not rebuild the indexes, am I right?
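One way to see the per-index numbers behind those totals (a sketch using the standard pg_relation_size and pg_size_pretty functions; the table name comes from the post):

SELECT i.indexrelid::regclass AS index_name,
       pg_size_pretty(pg_relation_size(i.indexrelid::regclass)) AS index_size,
       pg_size_pretty(pg_relation_size(i.indrelid::regclass))   AS table_size
FROM pg_index i
WHERE i.indrelid = 'product_price'::regclass;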