On Mon, Nov 2, 2015 at 12:19 AM, Andrey Osenenko wrote:
>
> It also looks like if there was a way to create a table with just primary
> key and add an index to it that indexes data from another table, it would
> work much, much faster since there would be very little to read from disk
> after index lookup.
On 11/2/15 2:19 AM, Andrey Osenenko wrote:
It also looks like if there was a way to create a table with just
primary key and add an index to it that indexes data from another table,
it would work much, much faster since there would be very little to read
from disk after index lookup. But looks li
Guido Niewerth writes:
> And this is the execution plan. It looks like it does a slow sequential scan
> where it should be able to do an index scan:
> 2015-11-02 17:42:10 CET LOG: duration: 5195.673 ms plan:
> Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data
> WHERE key = old.
I needed to set up the trigger function again, so here it is:
CREATE OR REPLACE FUNCTION public.fn_trigger_test ()
RETURNS trigger AS
$body$
DECLARE
start TIMESTAMP;
BEGIN
start := timeofday();
IF TG_OP = 'UPDATE' THEN
IF NOT EXISTS( SELECT key FROM custom_data WHERE ke
Guido Niewerth writes:
> As you can see, there's a huge runtime difference between the select query
> used in the trigger function and the one run from the SQL editor.
contrib/auto_explain might be useful in seeing what's going on; in
particular it would tell us whether or not a different plan is being chosen
inside the trigger.
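For reference, a minimal session-level setup for auto_explain might look like the following; the parameter names are the module's documented settings, and log_nested_statements is the one that matters for seeing plans of queries run inside a trigger function:

```sql
-- Load the module for this session (alternatively preload it via
-- session_preload_libraries in postgresql.conf).
LOAD 'auto_explain';

-- Log the plan of every completed statement (threshold 0 ms).
SET auto_explain.log_min_duration = 0;

-- Also log statements executed inside functions, such as the
-- queries run by the trigger function.
SET auto_explain.log_nested_statements = on;

-- Optionally include actual row counts and timings.
SET auto_explain.log_analyze = on;
```

With this in place, the plan chosen for the trigger's SELECT shows up in the server log and can be compared against the plan from the SQL editor.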
On Mon, Nov 2, 2015 at 7:52 PM, FattahRozzaq wrote:
> If I install PostgreSQL on Linux (Debian),
> what is the maximum value of max_connections that PostgreSQL can take?
> What is the maximum value of max_prepared_transactions that PostgreSQL can take?
Per definition, those parameters have a max value of 2
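A way to check this on any running server: the compiled-in ceilings for these parameters are exposed in the pg_settings view (min_val and max_val are the hard limits), so a quick query answers the question directly:

```sql
-- min_val/max_val are the hard limits compiled into the server;
-- the practically usable values are far lower (RAM, kernel fd limits).
SELECT name, setting, min_val, max_val
FROM pg_settings
WHERE name IN ('max_connections',
               'max_prepared_transactions',
               'max_files_per_process');
```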
Hi all,
If I install PostgreSQL on Linux (Debian),
what is the maximum value of max_connections that PostgreSQL can take?
What is the maximum value of max_prepared_transactions that PostgreSQL can take?
What is the maximum value of max_files_per_process that PostgreSQL can take?
Regards,
Fattah
Hello,
I've got a table custom_data which essentially contains a number of key/value
pairs. This table holds a large number (about 40M) of records, and I need the
distinct keys and values for several purposes. Selecting those distinct values
takes a couple of seconds, so I decided to maintain a separate table.
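The side table described here might be sketched as follows; all table and column names are illustrative, not taken from the thread:

```sql
-- Hypothetical sketch: a small side table holding the distinct keys,
-- to be kept in sync by triggers on custom_data.
CREATE TABLE custom_keys (
    key text PRIMARY KEY
);

-- Populate it once from the existing data.
INSERT INTO custom_keys
SELECT DISTINCT key FROM custom_data;

-- Reading the distinct keys is then a cheap scan of a tiny table,
-- instead of a multi-second scan over ~40M rows.
SELECT key FROM custom_keys;
```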
Thank you.
That's really sad news. This means that even though there is an index that
lets you find the rows you want almost immediately, retrieving the primary
keys still requires a lot of disk I/O.
I created a new table that contains only primary key and tsvector value,
and (at least that's how
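The narrow table described above might look something like this; the names and the bigint key type are assumptions for illustration, not details from the thread:

```sql
-- Hypothetical sketch of the narrow table: only the primary key and
-- the tsvector, so a lookup touches very little heap data.
CREATE TABLE custom_data_fts (
    id  bigint PRIMARY KEY,   -- same key as the main table
    tsv tsvector NOT NULL
);

CREATE INDEX custom_data_fts_tsv_idx
    ON custom_data_fts USING gin (tsv);

-- Example lookup: return matching primary keys only.
SELECT id
FROM custom_data_fts
WHERE tsv @@ to_tsquery('english', 'example');
```

The matched ids can then be joined back to the wide table, so the expensive random I/O is limited to the rows that actually match.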