On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
Hi,
I have a delete query taking 7.2G of RAM (and counting), but I do not
understand why so much memory is necessary. The server has 12G, and
I'm afraid it'll go into swap. Using PostgreSQL 8.3.14.
I'm purging some old data from table
On 8 July 2011 10:44, Vincent de Phily
vincent.deph...@mobile-devices.fr wrote:
On Friday 08 July 2011 10:05:47 Dean Rasheed wrote:
On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
Hi,
I have a delete query taking 7.2G of RAM (and counting), but I do not
understand why so much
On Fri, Jul 8, 2011 at 12:48 PM, Dean Rasheed dean.a.rash...@gmail.com wrote:
Yes, it's the same issue that affects deferrable PK and FK
constraints, but even non-deferrable FKs use AFTER ROW triggers that
suffer from this problem. These triggers don't show up in a \d from
psql, but they are
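(A common workaround for the per-row trigger event queue described above is to delete in bounded batches; the sketch below is illustrative only, and the table name, column names, and interval are assumptions, not from this thread.)
DELETE FROM table1
WHERE id IN (
    SELECT id FROM table1
    WHERE ts < now() - interval '30 days'
    LIMIT 10000
);
-- Each deleted row queues one AFTER ROW trigger event in memory for
-- FK enforcement, so small batches keep the event list bounded.
-- Repeat (e.g. from a script) until zero rows are deleted.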
Hi,
I am using the libpq library and PostgreSQL 8.4 for my Linux application running on
ARM with 256 MB of RAM. I am just doing:
PQconnectdb(conninfo);
PQexec(conn, "INSERT INTO table1 ..."); (0.009661 sec.)
PQexec(conn, "INSERT INTO table1 ..."); (0.004208 sec.)
PQexec(conn, "INSERT INTO table2 ..."); (0.007352 sec.)
PQexec(conn, "INSERT
On 05.07.2011 16:35, Shaun Thomas wrote:
I'd say it's probably safe enough these days. But it's also one of those
exclusive selling points they're using right now to garner EDB
customers. So I doubt it'll be released any time *soon*, though it may
make it eventually.
I doubt the community would
On Fri, Jul 8, 2011 at 4:35 AM, Dean Rasheed dean.a.rash...@gmail.com wrote:
On Thu, 2011-07-07 at 15:34 +0200, vincent dephily wrote:
Hi,
I have a delete query taking 7.2G of RAM (and counting), but I do not
understand why so much memory is necessary. The server has 12G, and
I'm afraid
On Fri, Jul 8, 2011 at 15:34, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
On 05.07.2011 16:35, Shaun Thomas wrote:
I'd say it's probably safe enough these days. But it's also one of those
exclusive selling points they're using right now to garner EDB
customers. So I doubt
That's a great point about autovacuum_vacuum_scale_factor; I will lower the
value there to 0.2 and see if autovacuum starts doing a better job. (We use
PostgreSQL 8.3.5 currently, by the way.)
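(For reference, a sketch of the postgresql.conf change proposed above; the 0.2 value is the one named in this message, and defaults vary by version:)
# postgresql.conf sketch: trigger autovacuum on a table once roughly
# 20% of its rows have changed, instead of the higher threshold
# previously in effect.
autovacuum = on
autovacuum_vacuum_scale_factor = 0.2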
Thanks for the notes and the useful page link on vacuum full. We are
running vacuum full primarily
On 07/08/2011 12:46 PM, D C wrote:
That said, it sounds like if we switched to daily truncates of each
table (they can be purged entirely each day) rather than delete
froms, then there truly would not be any reason to use vacuum
full. Does that sound plausible?
That's exactly right. If you
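(The truncate-instead-of-delete approach discussed above would look something like the sketch below; the table name is an assumption:)
-- Sketch: TRUNCATE reclaims the table's disk space immediately and
-- leaves no dead rows behind, so there is nothing for VACUUM (FULL)
-- to clean up. DELETE only marks rows dead until a vacuum runs.
TRUNCATE TABLE daily_events;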
Hello folks,
This isn't really a problem, so much as an observation of just how much
the internals have changed over the years. We've got an older version
we're hoping to upgrade soon, and a developer asked me to optimize this
today:
SELECT order_id
FROM order
WHERE order_id =
I have run into issue where the query optimizer is choosing the wrong
execution plan when I'm trying to join two large tables that have been
partitioned. I would really appreciate it if someone could help me out
with this. I don't know whether I've found a bug in the optimizer, or whether
there is
Does anyone have any suggestions for my problem? (I have to wonder if
I'm somehow just not getting people's attention or what. This is my
second question this week on a public mailing list that has gotten
exactly 0 replies)
Jonathan
On 7/5/2011 8:18 PM, Jonathan wrote:
I have a query that
On Fri, 2011-07-08 at 04:23 -0700, Sergio Mayoral wrote:
this must be something with the parser stage and since i am doing
every time the same queries, I would like to know if there is a way to
cache these queries in order to speed up the first INSERT.
I doubt it's the parser.
Seeing as it's
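(If repeated parsing really were the cost, prepared statements would sidestep it. A hedged SQL-level sketch follows; the statement name, table, and parameter type are assumptions, and the same thing can be done from libpq via PQprepare/PQexecPrepared:)
-- Sketch: PREPARE parses and plans the INSERT once per session;
-- each subsequent EXECUTE skips the parser and planner entirely.
PREPARE ins_table1 (integer) AS
    INSERT INTO table1 VALUES ($1);
EXECUTE ins_table1(1);
EXECUTE ins_table1(2);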
Hello
It is impossible to help you without more detailed info about your problem;
we need to see the execution plan and the slow query.
Regards
Pavel Stehule
2011/7/9 Jonathan jonat...@kc8onw.net:
Does anyone have any suggestions for my problem? (I have to wonder if I'm
somehow just not
Hello
sorry, I didn't see the link to the private pastebin.
The problem is in the LIMIT, because the query without LIMIT returns only a
few rows more than the query with LIMIT. You can try to materialize the query
without LIMIT and then apply the LIMIT on top, like
SELECT * FROM (your query without limit OFFSET 0) x LIMIT 30;