scott.marlowe wrote:
sort_mem might do with a small bump, especially if you're only handling a
few connections at a time. Be careful: it's allocated per sort, and measured in
kilobytes, so it's easy for folks to set it too high and make their machine
start flushing too much kernel cache, which will slow things down
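Since the setting is per sort, it can be raised just for the session that needs it rather than globally; a minimal sketch (sort_mem was renamed work_mem in 8.0, and the value shown is only an example):

```sql
-- sort_mem is specified in kilobytes, so 4096 = 4MB per sort operation.
SET sort_mem = 4096;
-- ... run the sort-heavy query here ...
RESET sort_mem;
```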
Thomas F. O'Connell wrote:
I'm seeing some weird behavior on a repurposed server that was wiped
clean and set up to run as a database and application server with
postgres and Apache, as well as some command-line PHP scripts.
The box itself is a quad processor (2.4 GHz Intel Xeons) Debian woody
Hi, I am re-posting my question here after trying to find a solution
in the PHP pgsql list with no luck.
I am experiencing some performance issues that I think are stemming
from prepared statements. I have a pretty simple query:
SELECT cl.idOffer,cl.idaffiliate ,cl.subid,cl.datetime
FROM click
Hi Merlin,
On Tue, Dec 30, 2008 at 11:42 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 30, 2008 at 1:59 PM, bricklen brick...@gmail.com wrote:
Hi, I am re-posting my question here after trying to find a solution
in the PHP pgsql list with no luck.
I am experiencing some
Hi Scott,
On Tue, Dec 30, 2008 at 12:09 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Tue, Dec 30, 2008 at 12:42 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Dec 30, 2008 at 1:59 PM, bricklen brick...@gmail.com wrote:
Hi, I am re-posting my question here after trying to find
(according to a
warning in the postgresql docs at
http://wiki.postgresql.org/wiki/8.1.4_et._al._Security_Release_FAQ).
Thanks!
Bricklen
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql
Again, I agree completely. What I am after I guess are some pointers
on where to look for that, with regards to PHP. Whatever I turn up, I
will turn over to our developers, but before I do that I want to be
sure I am giving them the correct advice.
Thanks,
Bricklen
Hi Guillaume,
On Wed, Dec 31, 2008 at 1:12 AM, Guillaume Smet
guillaume.s...@gmail.com wrote:
On Tue, Dec 30, 2008 at 7:59 PM, bricklen brick...@gmail.com wrote:
I would like to continue to use bind variables to prevent sql
injection, but I'd like to force a plan re-parse for every single
On Mon, Jan 12, 2009 at 2:59 PM, Scott Marlowe scott.marl...@gmail.com wrote:
OK, I've got a query that's running slow the first time, then fast.
But I can't see where the time is really being spend on the first run.
Query and plan attached to preserve formatting.
The index scan and nested
Hi, I have a question about a db-wide vacuum that I am running that is
taking a much longer time than normal. We switched over to our warm standby
server today -- which is virtually identical to the source db server -- and
I initiated a vacuum analyze verbose. Normally this process wouldn't take
Yeah, there's a lot. Way more than I am accustomed to seeing from the same
command on the previous server.
On Thu, Aug 27, 2009 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen brick...@gmail.com writes:
Hi, I have a question about a db-wide vacuum that I am running that is
taking
On Wed, Sep 2, 2009 at 8:05 AM, Kevin Kempter kev...@consistentstate.com wrote:
the explain plan shows most any query scans/hits all partitions even if
we specify the partition key:
explain select * from pwreport.url_hits where time
date_part('epoch'::text, '2009-08-12'::timestamp
In the following query, We are seeing a sub-optimal plan being chosen. The
following results are after running the query several times (after each
change).
dev1=# select version();
version
Hi Tomas,
2009/9/10 t...@fuzzy.cz
default_statistics_target = 100 (tried with 500, no change). Vacuum
analyzed
before initial query, and after each change to default_statistics_target.
Modifying the statistics target is useful only if the estimates are
seriously off, which is not your
On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen brick...@gmail.com writes:
Is there any other data I can provide to shed some light on this?
The table and index definitions?
The straight indexscan would probably win if the index column order
were ofid, date
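Spelled out, the suggested column order puts the equality-tested column first and the range/order column second; the table and index names here are placeholders, not taken from the thread:

```sql
-- Hypothetical index matching the suggestion: ofid (equality) first,
-- then date (range / ordering).
CREATE INDEX some_table_ofid_date_idx ON some_table (ofid, date);
```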
On Thu, Sep 10, 2009 at 9:57 AM, Robert Haas robertmh...@gmail.com wrote:
2009/9/10 t...@fuzzy.cz:
Playing around with seq_page_cost (1) and random_page_cost (1), I can
get
the correct index selected. Applying those same settings to our
production
server does not produce the optimal
On Thu, Sep 10, 2009 at 10:02 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Sep 10, 2009 at 12:56 PM, bricklen brick...@gmail.com wrote:
On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen brick...@gmail.com writes:
Is there any other data I can provide
On Thu, Sep 10, 2009 at 10:07 AM, bricklen brick...@gmail.com wrote:
On Thu, Sep 10, 2009 at 10:02 AM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Sep 10, 2009 at 12:56 PM, bricklen brick...@gmail.com wrote:
On Thu, Sep 10, 2009 at 8:43 AM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen
On Thu, Sep 10, 2009 at 10:56 AM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen brick...@gmail.com writes:
I just created a new index as Tom said, and the query *does* use the new
index (where ofid precedes date in the definition).
And is it indeed faster than the other alternatives
2009/10/13 Grzegorz Jaśkiewicz gryz...@gmail.com:
2009/10/13 Shaul Dar shaul...@gmail.com
Sorry, I guess I wasn't clear.
I have an existing table in my DB, and it doesn't have a column with
serial values (actually it did originally, but due to later deletions of
about 2/3 of the rows the
On Mon, Aug 30, 2010 at 3:28 PM, Kevin Kempter
cs_...@consistentstate.com wrote:
Hi all ;
we have an automated partition creation process that includes the creation of
an FK constraint. we have a few other servers with similar scenarios and this
is the only server that stinks per when we
On Sat, Oct 9, 2010 at 4:26 PM, Neil Whelchel neil.whelc...@gmail.com wrote:
Maybe an
estimate(*) that works like count but gives an answer from the index without
checking visibility? I am sure that this would be good enough to make a page
list, it is really no big deal if it errors on the
On Thu, Oct 14, 2010 at 12:43 PM, Tony Capobianco
tcapobia...@prospectiv.com wrote:
We have 4 quad-core processors and 32GB of RAM. The below query uses
the members_sorted_idx_001 index in oracle, but in postgres, the
optimizer chooses a sequential scan.
explain analyze create table
conversionrejected_pk on
conversionrejected cr (cost=0.00..7.17 rows=1 width=31) (actual
time=0.563..0.564 rows=1 loops=72)
Index Cond: (cr.idconversion = c.id)
Total runtime: 3214.972 ms
Thanks,
Bricklen
On Tue, Nov 9, 2010 at 2:48 PM, Andy Colson a...@squeakycode.net wrote:
On 11/9/2010 3:26 PM, bricklen wrote:
- Seq Scan on conversionrejected cr (cost=0.00..191921.82
rows=11012682 width=31) (actual time=0.003..1515.816 rows=11012682
loops=72)
Total runtime: 292668.992 ms
On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen brick...@gmail.com writes:
I have a query that is getting a pretty bad plan due to a massively
incorrect count of expected rows.
The query doesn't seem to match the plan. Where is that OR (c.id =
38441828354::bigint
On Tue, Nov 9, 2010 at 3:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
bricklen brick...@gmail.com writes:
On Tue, Nov 9, 2010 at 3:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
The query doesn't seem to match the plan. Where is that OR (c.id =
38441828354::bigint) condition coming from?
Ah sorry
On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall k...@rice.edu wrote:
I cannot speak to your suggestion, but it sounds like you are not
vacuuming enough and a lot of the bloat/randomization would be helped
by making use of HOT updates in which the updates are all in the same
page and are
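HOT updates need free space on the same heap page, which is what a lowered fillfactor provides; a hedged sketch (table name invented, value illustrative):

```sql
-- Leave 20% of each heap page free so updates can stay on-page (HOT),
-- provided no indexed column is modified. Affects newly written pages;
-- a table rewrite (e.g. CLUSTER or VACUUM FULL) applies it to existing data.
ALTER TABLE mytable SET (fillfactor = 80);
```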
On Sat, Nov 13, 2010 at 1:32 AM, Marc Mamin m.ma...@intershop.de wrote:
Hello,
Just a short though:
Is it imaginable to compare the prognoses of the plans with the actual
results
and somehow log the worst cases ?
a) to help the DBA locate bad statistics and queries
b) as additional
On Fri, Nov 26, 2010 at 6:22 AM, felix crucialfe...@gmail.com wrote:
Hello,
I have a very large table that I'm not too fond of. I'm revising the design
now.
Up until now its been insert only, storing tracking codes from incoming
webtraffic.
It has 8m rows
It appears to insert fine, but
On Tue, Nov 30, 2010 at 3:23 PM, T.H. calin...@gmail.com wrote:
Just looking into it now, thanks for the suggestion. Is there a reason that
EXISTS is generally faster than IN for this sort of query?
-Tristan
Exists will return immediately upon finding a match -- assuming there is one.
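A sketch of the rewrite being discussed; table and column names are invented for illustration:

```sql
-- IN form: the subquery result is typically built in full before matching.
SELECT o.id
  FROM orders o
 WHERE o.customer_id IN (SELECT c.id FROM customers c WHERE c.active);

-- EXISTS form: the subquery can stop at the first matching row
-- for each outer row.
SELECT o.id
  FROM orders o
 WHERE EXISTS (SELECT 1
                 FROM customers c
                WHERE c.id = o.customer_id
                  AND c.active);
```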
--
On Sat, Dec 4, 2010 at 11:45 AM, felix crucialfe...@gmail.com wrote:
Ok, I caught one : an update that is stuck in waiting.
the first one blocks the second one.
ns | 5902 | nssql | UPDATE fastadder_fastadderstatus SET built
= false WHERE fastadder_fastadderstatus.service_id = 1
Not
On Thu, Feb 17, 2011 at 11:17 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Mark Stosberg m...@summersault.com writes:
- The query planner didn't like it when the ORDER BY referred to a
column value instead of a static value, even when I believe it should
know that the column value never changes.
On Thu, Mar 17, 2011 at 7:25 AM, Michael Andreasen mich...@dunlops.com wrote:
Currently I am using a twin processor box with 2GB of memory and raid 5
disk.
I start postgres before my load with these settings, which have been
suggested.
I restore like this;
pg_restore -Fc -j 4 -i -O -d my_db
I recently had need of an array_except function but couldn't find
any good/existing examples. Based off the neat array_intersect
function at
http://www.postgres.cz/index.php/PostgreSQL_SQL_Tricks#Intersection_of_arrays,
I put together an array_except version to return the array elements
that are
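The function under discussion was along these lines; a minimal sketch, not necessarily the exact version posted (SQL-language, IMMUTABLE, and duplicate-free because EXCEPT is a set operation):

```sql
CREATE OR REPLACE FUNCTION array_except(anyarray, anyarray)
RETURNS anyarray LANGUAGE sql IMMUTABLE AS $$
  -- Elements of $1 that do not appear in $2; EXCEPT also removes duplicates.
  SELECT ARRAY(
    SELECT unnest($1)
    EXCEPT
    SELECT unnest($2)
  );
$$;
-- SELECT array_except(ARRAY[1,2,3,4], ARRAY[2,4]);
-- yields {1,3} (element order is not guaranteed)
```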
On Thu, Sep 29, 2011 at 8:08 PM, Merlin Moncure mmonc...@gmail.com wrote:
*) Prefer union all to union
*) prefer array constructor to array_agg when not grouping.
*) perhaps consider not reusing 'except' name with different semantic
meaning
Well done
merlin (on phone in bed)
Hi Merlin,
On Fri, Sep 30, 2011 at 5:23 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
Since you are using except and not except all, you are not looking at arrays
with duplicates.
For this case next function what the fastest for me:
create or replace function array_except2(anyarray,anyarray) returns
On Fri, Sep 30, 2011 at 5:23 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
Since you are using except and not except all, you are not looking at arrays
with duplicates.
For this case next function what the fastest for me:
create or replace function array_except2(anyarray,anyarray) returns
On Fri, Sep 30, 2011 at 2:12 PM, Merlin Moncure mmonc...@gmail.com wrote:
FWIW, speaking as somebody who has no need of this function, array_xor is
a pretty clear name that indicates what's going to happen.
+1 on this -- was going to suggest until you beat me to it. I also
for the record
On Tue, Oct 11, 2011 at 3:16 AM, James Cranch jd...@cam.ac.uk wrote:
This is EXPLAIN ANALYZEd here:
http://explain.depesz.com/s/EiS
Sort Method: external merge Disk: 35712kB
SOFTWARE AND HARDWARE
=====================
I'm running PostgreSQL 8.4.8 on x86_64-pc-linux-gnu, compiled by GCC
On Mon, Jun 10, 2013 at 4:29 AM, Niels Kristian Schjødt
nielskrist...@autouncle.com wrote:
2013-06-10 11:21:45 GMT FATAL: could not connect to the primary server:
could not connect to server: No route to host
Is the server running on host 192.168.0.4 and accepting
On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt
nielskrist...@autouncle.com wrote:
I can't seem to figure out which steps I need to do, to get the standby
server wiped and get it started as a streaming replication again from
scratch. I tried to follow the steps, from step 6, in here
On Mon, Jun 10, 2013 at 8:51 AM, bricklen brick...@gmail.com wrote:
On Mon, Jun 10, 2013 at 8:35 AM, Niels Kristian Schjødt
nielskrist...@autouncle.com wrote:
I can't seem to figure out which steps I need to do, to get the standby
server wiped and get it started as a streaming replication
On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda m.sakre...@gmail.com wrote:
SELECT
DISTINCT ON (type) ts, type, details
FROM
observations
WHERE
subject = '...'
ORDER BY
type, ts DESC;
First thing: What is your work_mem set to, and how much RAM is in the
machine? If you look at
On Thu, Jun 20, 2013 at 10:14 PM, Maciek Sakrejda m.sakre...@gmail.com wrote:
On Thu, Jun 20, 2013 at 9:13 PM, bricklen brick...@gmail.com wrote:
On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda m.sakre...@gmail.com wrote:
SELECT
DISTINCT ON (type) ts, type, details
FROM
observations
On Mon, Jun 24, 2013 at 3:18 PM, Ben midfi...@gmail.com wrote:
create table jointable (
id1 integer not null,
id2 date not null,
id3 integer not null,
id4 time not null,
id5 integer not null
);
and then perform a join against this table.
Is it safe to assume you ANALYZEd the
On Thu, Jun 27, 2013 at 10:17 AM, Shaun Thomas stho...@optionshouse.com wrote:
Well yeah. That's not really the point, though. Aside from existing code,
hard-coding is generally frowned upon. Our devs have been using
CURRENT_DATE and its ilk for over six years now.
Would it help to put the
On Mon, Jul 8, 2013 at 9:21 AM, Tom Harkaway t...@hvpa.com wrote:
The ‘explain’ output for the query is very different between the two
systems.
You ran ANALYZE after loading the data? Can you post the query and EXPLAIN
ANALYZE output?
Also, some tips on getting answers with (potentially)
On Wed, Jul 17, 2013 at 12:50 PM, Ellen Rothman
eroth...@datalinedata.com wrote:
I have the same table definition in two different databases on the same
computer. When I explain a simple query in both of them, one database uses
a sequence scan and the other uses an index scan. If I try to run
On Tue, Jul 30, 2013 at 3:52 AM, Jeison Bedoya jeis...@audifarma.com.co wrote:
Hi everybody,
In recent days, we have seen many processes in reaching the lock held 5000.
Do you know what queries are holding locks? Is that behaviour expected?
At that time my machine will become sluggish
On Wed, Sep 11, 2013 at 3:36 AM, Maximilian Tyrtania
li...@contactking.de wrote:
JOIN emailsendings es ON et.id = es.emailtemplate_id
ORDER BY
es.sentonat desc
Perhaps an index on (es.emailtemplate_id, es.sentonat desc) would help?
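Spelled out, that suggestion is a composite index with the join key first and the ORDER BY column descending; the index name is invented:

```sql
-- Join key first, then the ORDER BY column in matching (DESC) order,
-- so the planner can return rows pre-sorted from the index.
CREATE INDEX emailsendings_tmpl_sentonat_idx
    ON emailsendings (emailtemplate_id, sentonat DESC);
```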
On Sat, Sep 14, 2013 at 11:28 AM, Craig James cja...@emolecules.com wrote:
I'm trying to do a pg_dump of a database, and it more-or-less just sits
there doing nothing.
What is running in the db? Perhaps there is something blocking the pg_dump?
What does the output of the following query look
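The query actually posted in the thread is cut off here; as a stand-in, a lock/activity check along these lines answers the same question (column names are from the 9.2-to-9.5-era pg_stat_activity, so this is an assumption, not the original query):

```sql
-- What is running, and is anything waiting on a lock?
SELECT pid, state, waiting,
       now() - query_start AS runtime,
       query
  FROM pg_stat_activity
 WHERE state <> 'idle'
 ORDER BY query_start;
```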
On Sat, Sep 14, 2013 at 6:54 PM, Craig James cja...@emolecules.com wrote:
On Sat, Sep 14, 2013 at 11:36 AM, bricklen brick...@gmail.com wrote:
On Sat, Sep 14, 2013 at 11:28 AM, Craig James cja...@emolecules.com wrote:
I'm trying to do a pg_dump of a database, and it more-or-less just sits
On Wed, Sep 25, 2013 at 8:58 AM, Jim Garrison jim.garri...@nwea.org wrote:
I spent about a week optimizing a query in our performance-testing
environment, which has hardware similar to production.
I was able to refactor the query and reduce the runtime from hours to
about 40 seconds, through
On Fri, Nov 8, 2013 at 6:04 AM, Jan Walter j...@commontongue.com wrote:
Hi,
I would like to know, how does the size of the IN list affect query
planner.
I have a query
select distinct on (event_id, tag_id) et.id,
e.id as event_id, t.id as tag_id, t.name,
t.user_id,
On Thu, Nov 21, 2013 at 2:04 PM, Robert DiFalco robert.difa...@gmail.com wrote:
H...I'm not sure why the buffers option didn't work for me, maybe the
heroku psql is out of date.
Did you enclose it in brackets? Eg. EXPLAIN (ANALYZE, BUFFERS) SELECT ...
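For reference, the parenthesized options form (required for BUFFERS, available since 9.0) looks like this; the query itself is just an example:

```sql
-- BUFFERS adds shared/local/temp block hit, read, dirtied and written
-- counts to each node of the ANALYZE output.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM pg_class;
```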
On Thu, Nov 21, 2013 at 2:58 PM, bricklen brick...@gmail.com wrote:
Did you enclose it in brackets? Eg. EXPLAIN (ANALYZE, BUFFERS) SELECT ...
Never mind, I see it further down. My apologies.
On Thu, Dec 5, 2013 at 2:42 AM, Max maxa...@yahoo.com.br wrote:
We are starting a new project to deploy a solution in cloud with the
possibility to be used for 2.000+ clients. Each of this clients will use
several tables to store their information (our model has about 500+ tables
but there's
On Thu, Dec 5, 2013 at 10:08 AM, Scott Marlowe scott.marl...@gmail.com wrote:
Rules have a lot of overhead. Is there a reason you're not using
defaults or triggers?
Or for even less overhead, load the partitions directly, and preferably use
DEFAULT nextval('some_sequence') as Scott mentioned.
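A sketch of what "load the partitions directly" looks like under the old inheritance-based partitioning scheme; all names here are invented:

```sql
-- Instead of inserting into the parent table and paying for a routing
-- rule or trigger, insert straight into the target child partition.
-- The partition's id column is assumed to be declared as:
--   id bigint DEFAULT nextval('measurements_id_seq')
INSERT INTO measurements_2013_12 (id, recorded_at, value)
VALUES (DEFAULT, now(), 42.0);
```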
On Fri, Dec 20, 2013 at 7:52 AM, Dave Johansen davejohan...@gmail.com wrote:
It would be nice if there was just a move command, but that seems like
the type of model that we want and we'll probably move to that.
I haven't been following this thread, but this comment caught my eye. Are
you
On Mon, Feb 3, 2014 at 1:35 PM, Pweaver (Paul Weaver)
pwea...@panjiva.com wrote:
table_name stats:
~ 400,000,000 rows
We are deleting 10,000,000s of rows in 100,000 row increments over a few
days time prior/during this slowdown.
If you issue VACUUM or VACUUM ANALYZE after each DELETE, do
On Fri, Mar 21, 2014 at 5:59 PM, Erik van Zijst erik.van.zi...@gmail.com wrote:
Hi there,
I've got a relatively simple query that contains expensive BCRYPT
functions that gets optimized in a way that causes postgres to compute
more bcrypt hashes than necessary, thereby dramatically slowing
On Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst erik.van.zi...@gmail.com wrote:
Yes, that works (it does at least on my small test database).
However, these queries are generated by a parser that translates
complex parse trees from a higher level DSL that doesn't lend itself
well to logically
On Sat, Mar 22, 2014 at 8:37 PM, Erik van Zijst erik.van.zi...@gmail.com wrote:
On Sat, Mar 22, 2014 at 3:56 PM, bricklen brick...@gmail.com wrote:
On Sat, Mar 22, 2014 at 3:27 PM, Erik van Zijst
erik.van.zi...@gmail.com
I could nonetheless take a stab at it, but life would certainly
On Sun, Mar 30, 2014 at 12:45 PM, Christopher Jackson crjac...@gmail.com wrote:
Hi all,
tl;dr - How can I speed up my count-distinct query?
Depending on how often you need to run that query and how important it is
to you, if you are willing to accept a performance hit on
Query plan for the md5() index test:
Index Scan using lots_of_columns_md5_idx on lots_of_columns
(cost=0.93..3.94 rows=1 width=208) (actual time=0.043..0.043 rows=1 loops=1)
Index Cond: ('1ba23a0668ec17e230d98c270d6664dc'::text =
On Fri, Apr 22, 2016 at 6:57 AM, Rob Imig wrote:
> Just to followup where I'm at, I've constructed a new column which is a
> 100 bit bitstring representing all the flags. Created a b-tree index on
> that column and can now do super fast lookups (2) for specific scenarios
>
On Wed, Jul 12, 2017 at 12:30 AM, Charles Nadeau
wrote:
>
> I use noop as the scheduler because it is better to let the RAID
> controller re-arrange the IO operation before they reach the disk. Read
> ahead is set to 128:
>
> charles@hpdl380g6:~$ cat
On Mon, Apr 24, 2017 at 9:17 PM, Andres Freund wrote:
>
> Questions (answer as many you can comfortably answer):
> - How many MB/s, segments/s do you see on busier servers?
> - What generates the bulk of WAL on your servers (9.5+ can use
> pg_xlogdump --stats to compute
On Thu, May 4, 2017 at 3:52 AM, wrote:
> Hi,
>
> I have a performance problem with my query. As a simplified example, I
> have a table called Book, which has three columns: id, released (timestamp)
> and author_id. I have a need to search for the latest books released
Richard Huxton wrote:
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
http://www.singingsql.com/
Dan has written some remarkable papers on sql tuning. Some of it is pretty complex, but his book
SQL Tuning is an excellent resource.
--
Christopher Kings-Lynne wrote:
Another trick you can use with large data sets like this when you want
results
back in seconds is to have regularly updated tables that aggregate the
data
along each column normally aggregated against the main data set.
Maybe some bright person will prove me wrong
Colton A Smith wrote:
Hi:
I have a table called sensors:
Table public.sensor
 Column | Type | Modifiers
--------+------+-----------
Yves Vindevogel wrote:
Hi,
rvponp=# explain select * from tblprintjobs order by loginuser,
desceventdate, desceventtime offset 25 limit 25 ;
I have a very simple query on a big table. When I issue a limit
and/or offset clause, the query is not using the index.
Can anyone explain me
to aborting the transaction altogether?
To summarize, I'm looking for the most efficient and fastest way to perform my
upserts. Tips and/or references to pertinent docs are also appreciated!
If any more information is necessary, please let me know.
(postgresql 8.0.3, linux)
Cheers,
Bricklen
Meetesh Karia wrote:
I don't know what this will change wrt how often you need to run VACUUM
(I'm a SQL Server guy), but instead of an update and insert, try a
delete and insert. You'll only have to find the duplicate rows once and
your insert doesn't need a where clause.
Meetesh
Vacuum
Jacques Caron wrote:
I have a similar situation, and the solution I use (though I haven't
really tested many different situations):
- have a trigger ON INSERT which does:
UPDATE set whatever_value=NEW.whatever_value,... WHERE
whatever_key=NEW.whatever_key AND...
IF FOUND THEN
RETURN
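Completed as a sketch (pre-9.5, so no ON CONFLICT; table, column, and function names are invented; note the classic race window between the UPDATE and the INSERT under concurrent sessions):

```sql
CREATE OR REPLACE FUNCTION upsert_trg() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    UPDATE t SET whatever_value = NEW.whatever_value
     WHERE whatever_key = NEW.whatever_key;
    IF FOUND THEN
        RETURN NULL;   -- row updated in place; suppress the INSERT
    END IF;
    RETURN NEW;        -- no existing row; let the INSERT proceed
END;
$$;

CREATE TRIGGER t_upsert BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE upsert_trg();
```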
Dennis Bjorklund wrote:
On Thu, 23 Jun 2005, Bricklen Anderson wrote:
iii. UNIQUE constraint on table t1. This didn't seem to perform too
badly with fewer rows (preliminary tests), but as you'd expect, on error
the whole transaction would roll back. Is it possible to skip a row if
it causes
evgeny gridasov wrote:
Hi Everybody.
I am going to replace some 'select count(*) from ... where ...' queries
which run on large tables (10M+ rows) with something like
'explain select * from ... where ' and parse planner output after that
to find out its forecast about number of rows the
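A lighter-weight alternative to parsing EXPLAIN output is to read the planner's row estimate straight from the catalog; it is only an estimate, refreshed by VACUUM and ANALYZE, and the table name below is a placeholder:

```sql
-- reltuples holds the planner's current estimate of the row count.
SELECT reltuples::bigint AS estimated_rows
  FROM pg_class
 WHERE relname = 'bigtable';
```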
David Gagnon wrote:
- Index Scan using cr_pk on cr (cost=0.00..6.02 rows=1
width=828) (actual time=0.073..0.077 rows=1 loops=13587)
Index Cond: (((cr.crypnum)::text = 'M'::text) AND
(cr.crnum = outer.cscrnum))
Filter: ((crdate +
Merlin Moncure wrote:
On 2/5/05, Dirk Lutzebaeck [EMAIL PROTECTED] wrote:
snip
Was the original message actually from 2/5/05?
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail
Adam Rich wrote:
Doesn't sound like you want postgres at all. Try mysql.
Could you explain your reason for suggesting mysql? I'm simply curious
why you would offer that as a solution.
Bruce Momjian wrote:
Ravindran G-TLS,Chennai. wrote:
Note: Please bear with us for the disclaimer because it is automated in
the exchange server.
Regards,
Ravi
FYI, we are getting closer to rejecting any email with such a
disclaimer, or emailing you back every time saying we are ignoring the
kevin kempter wrote:
One of the things we need to query is the min date from the master table
- we may explore alternatives for this particular query, however even if
we fix this query I think we have a fundamental issue with the use of
indexes (actuallt the non-use) by the planner.
We had a