try JOINs...
The time that psql or pgAdmin shows is purely the PostgreSQL time.
The question here was about the actual application's time. Sometimes the data
transmission, fetching and processing on the app's side can take longer than
the 'postgresql' time.
On Wed, Apr 7, 2010 at 1:20 PM, wrote:
>
> Guys,
>
> Thanks for trying and keeping an open mind.
> If you want to know how Oracle addressed this issue, here it is: an index
> on two columns. I remember that in training they told me Postgres has
> no such index; can someone clarify?
>
2010/4/7
>
> Do you mean one index on two columns?
>
> something like this: create index idx1 on tb1(col1, col2);
>
yup :) It would be quite useless without that feature.
Don't listen to the Oracle folks; they obviously don't know much about
products other than Oracle db(s).
--
GJ
Starting with 8.3, there's this new feature called HOT, which helps a lot
when you do loads of updates.
Plus the writer is much quicker (30-40% sometimes), and autovacuum behaves
much more nicely.
Bottom line: upgrade to 8.3. 8.1 had autovacuum disabled by default for a
reason.
That really sounds like a hardware issue. The I/O basically causes the
system to freeze.
That happens sometimes on cheaper hardware.
On Sat, Apr 24, 2010 at 2:23 PM, Merlin Moncure wrote:
>
> Well, you missed the most important part: not using cursors at all.
> Instead of declaring a cursor and looping it to build the array, build
> it with array(). That's what I've been saying: arrays can completely
> displace both temp tabl
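A minimal sketch of the array() approach Merlin describes, assuming a
hypothetical table foo(id int):

  -- instead of DECLARE cur CURSOR FOR SELECT id FROM foo; plus a loop
  -- that appends each fetched row to an array, build it in one statement:
  SELECT array(SELECT id FROM foo WHERE id < 100) AS ids;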
On Mon, May 17, 2010 at 12:54 PM, Jon Nelson wrote:
> On Mon, May 17, 2010 at 5:10 AM, Pierre C wrote:
>> - or use a JOIN delete with a virtual VALUES table
>> - or fill a temp table with ids and use a JOIN DELETE
>
> What is a virtual VALUES table? Can you give me an example of using a
> virtual
Again, VALUES (1,2), (2,3), ... is a 'virtual table', as he calls it.
It really is not a table to PostgreSQL; I guess he is just using that
naming convention.
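A hedged sketch of both variants Pierre mentions, against a hypothetical
table tbl(id int):

  -- JOIN delete with a virtual VALUES table:
  DELETE FROM tbl
    USING (VALUES (1), (2), (3)) AS v(id)
    WHERE tbl.id = v.id;

  -- or fill a temp table with ids and use a JOIN DELETE:
  CREATE TEMP TABLE ids (id int);
  INSERT INTO ids VALUES (1), (2), (3);
  DELETE FROM tbl USING ids WHERE tbl.id = ids.id;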
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up per connection, so
that it can access its own temporary tables.
Temporary tables are also not autovacuumed.
And those are pretty much all of the differences.
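You can see the per-connection schema yourself; a small sketch (the
pg_temp_N name varies per backend):

  CREATE TEMP TABLE mytmp (id int);
  SELECT schemaname FROM pg_tables WHERE tablename = 'mytmp';
  -- returns something like pg_temp_3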
WAL does for the DB what journaling does for the FS.
Plus it allows you to recover to a point in time (PITR).
As for the RAM, it will be in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in shared buffers.
Unless you have a lot of doubt about the two, I don't think it makes
too much sense
WAL matters for performance, hence why it is advisable to have it on a
separate drive :)
Hi folks,
is there a general problem with PostgreSQL's performance on RAID10?
We see very low performance on writes (2-3x slower than on less
performant servers). I wonder if it is solely a problem of the RAID10
configuration, or if it is a PostgreSQL thing.
Would moving the WAL dir to a separate disk help?
Joining on varchars is always going to be very expensive. The longer the
value is, the more expensive it will be. Consider going for surrogate
keys.
On Mon, Aug 23, 2010 at 2:47 PM, Kevin Grittner
wrote:
> Grzegorz Jaśkiewicz wrote:
>
>> joining on varchars is always going to be very expensive. Longer
>> the value is, more expensive it will be. Consider going for
>> surrogate keys.
>
> Surrogate keys come with their own set of costs and introd
I am not a fan of the 'do this - this is best' kind of response to queries
like that.
Rather: this is what you should try, and choose whichever one suits you better.
So, rather than 'natural keys ftw', I am giving him another option to
choose from.
You see, in my world, I was able to improve some large dbs' pe
Oh, and I second using same types in joins especially, very much so :)
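A hedged sketch of the surrogate-key option, with hypothetical tables:

  CREATE TABLE customers (
      id   serial PRIMARY KEY,      -- surrogate key: small, fixed-width
      code varchar NOT NULL UNIQUE  -- natural key, kept for lookups
  );
  CREATE TABLE orders (
      id          serial PRIMARY KEY,
      customer_id int NOT NULL REFERENCES customers(id)
  );
  -- the join now compares 4-byte integers of identical type,
  -- instead of long varchars:
  SELECT o.id FROM orders o JOIN customers c ON c.id = o.customer_id;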
Implementation-wise, count(*) is faster. Very easy to test:
SELECT COUNT(*) FROM generate_series(1,100) a, generate_series(1,1000) b;
SELECT COUNT(a) FROM generate_series(1,100) a, generate_series(1,1000) b;
;]
2010/10/26 Szymon Guz :
>
> Well, strange. Why is that slower?
To answer that fully, you would need to see the implementation.
Suffice to say,
count(a) does:
if (a is not null)
{
count++;
}
and count(*) does:
count++;
--
GJ
you're joining on more than one key. That always hurts performance.
Try going through the archives first, because your question has probably
been answered many times already (although there is no definitive
answer as to what server PostgreSQL would need to fit your
purpose).
Also, this is an English list. If you prefer to ask questions in
Brazilian Portuguese, th
do you have any indexes on that table ?
At the end of the day, it all depends on how crucial that functionality
is to your app.
If very, I would add to every insert/update a trigger that would
update info in other table(s) with stats per character.
The other option is to write a function in C that would parse a word and
chop it into C,O (characte
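A hedged sketch of the trigger idea, with hypothetical tables words(word)
and char_stats(ch, cnt); note ON CONFLICT requires PostgreSQL 9.5+, newer
than the versions discussed here:

  CREATE TABLE words (word text);
  CREATE TABLE char_stats (
      ch  char(1) PRIMARY KEY,
      cnt bigint NOT NULL DEFAULT 0
  );

  CREATE OR REPLACE FUNCTION update_char_stats() RETURNS trigger AS $$
  BEGIN
      -- tally each character of the new word into char_stats
      INSERT INTO char_stats (ch, cnt)
      SELECT substr(NEW.word, i, 1), count(*)
        FROM generate_series(1, length(NEW.word)) AS i
       GROUP BY 1
      ON CONFLICT (ch) DO UPDATE SET cnt = char_stats.cnt + EXCLUDED.cnt;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER words_stats AFTER INSERT ON words
      FOR EACH ROW EXECUTE PROCEDURE update_char_stats();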
On Fri, Feb 6, 2009 at 3:43 PM, Mario Splivalo
wrote:
> Besides PK and uq-constraint indices I have this index:
>
> CREATE INDEX transactions_idx__client_data ON transactions
> USING btree (transaction_client_id, transaction_destination_id,
> transaction_operator_id, transaction_application_id,
>
Hey folks,
I have a few tables that inherit from table X.
The query I perform tries to obtain information about changes in all
tables that inherit from X.
Aside from that, I have a table Y that keeps other information related
to changes, but in a slightly different schema.
Anyway, there is one unique id
On Fri, Feb 6, 2009 at 6:15 PM, Robert Haas wrote:
> Just guessing here, but what values are you using for
> join_collapse_limit and from_collapse_limit, and what happens if you
> make them much bigger (like 100)?
both default values, afair = 8.
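For reference, both settings can be raised per session to test Robert's
suggestion:

  SET join_collapse_limit = 100;
  SET from_collapse_limit = 100;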
On Fri, Feb 6, 2009 at 6:20 PM, Grzegorz Jaśkiewicz wrote:
> On Fri, Feb 6, 2009 at 6:15 PM, Robert Haas wrote:
>> Just guessing here, but what values are you using for
>> join_collapse_limit and from_collapse_limit, and what happens if you
>> make them much bigger (like 10
That helped, thanks a lot Tom.
Looks like an additional item for the 'pet peeves' list (from -general).
:P
On Fri, Feb 6, 2009 at 9:50 PM, Tom Lane wrote:
> The UNION arms have to all be the same data type in order to have
> restrictions pushed down through the UNION. You did not show us
> the table declarations for your first example, but I bet that updateid
> isn't the same type in both. (And yes,
so Tom,
with a little help on Irc from Andrew (RhodiumToad) I got it 'fixed',
but I know this is just a hack:
Index: src/backend/utils/adt/selfuncs.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/utils/adt/selfuncs.c,v
retrieving
On Sat, Feb 7, 2009 at 5:39 PM, Tom Lane wrote:
>
> You seem to be laboring under the delusion that this is considered a
> bug. It's a necessary semantic restriction, because the pushed-down
> expression could mean different things when applied to different
> data types.
Very true Tom, still I w
And frankly I (and a few others) still think it is a defect, for a domain
with some base type should be treated as such. It is, after all, treated
that way when you create an index.
On Sun, Feb 8, 2009 at 6:34 PM, Tom Lane wrote:
> It's possible that there are specific cases where the UNION optimization
> checks could allow domains to be treated as their base types, but
> blindly smashing both sides of the check to base is going to break more
> cases than it fixes.
What my
On Wed, Feb 11, 2009 at 2:57 PM, Rajesh Kumar Mallah
wrote:
>> vacuum_cost_delay = 150
>> vacuum_cost_page_hit = 1
>> vacuum_cost_page_miss = 10
>> vacuum_cost_page_dirty = 20
>> vacuum_cost_limit = 1000
>> autovacuum_vacuum_cost_delay = 300
>
> why is it not a good idea to give end users control
Have you tried hanging a bunch of RAID1s off Linux's md, and letting it do
the RAID0 for you?
I have heard plenty of stories where this actually sped up performance. One
notable case is YouTube's servers.
2009/2/18 Rajesh Kumar Mallah :
> On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz
> wrote:
>> have you tried hanging bunch of raid1 to linux's md, and let it do
>> raid0 for you ?
>
> Hmmm , i will have only 3 bunches in that case as system has to boot
>
Just a question to Tom and team:
I saw a post a while ago about plans for 8.4, and Tom said it is very
likely that 8.4 will rewrite subselects into left joins. Is that still
planned?
I mean rewriting a query like:
select id from foo where id not in ( select id from bar);
into:
select f.id from foo f left join bar b on f.id=b.id where b.id is null;
On Fri, Feb 20, 2009 at 11:14 AM, marcin mank wrote:
>> Just as a question to Tom and team,
>
> maybe it's time for asktom.postgresql.org? Oracle has it :)
hehe,
on the other hand - that would make my people here very skilful; the
only reason I started to preach to them about joins and stuff is
be
On Fri, Feb 20, 2009 at 3:33 PM, Tom Lane wrote:
> =?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= writes:
>> I mean query like:
>> select id from foo where id not in ( select id from bar);
>> into:
>> select f.id from foo f left join bar b on f.id=b.id where b.id is null;
>
> Postgres does not do that, bec
After your recent commit Tom, the cost is sky-high, and it also takes
ages again with the subselect version, in the case of a two-table join. I
have to try the three-way one.
The foo/bar example above, with the note that all columns are NOT NULL,
behaves much differently now. I noticed that some of the 'anti join'
stuff has changed in CVS recently, but I don't know if that's to
blame.
Basically, what I can see is that the subselect case is no longer of
lower cost, to the l
But then you get 10 questions a week from Windows people about
passwords, and yet you haven't removed that :P
explain select ss, ARRAY(select id from foo where ss>0 and id between
7 and 156 order by random() limit 3) as v from
generate_series(1,100) ss;
QUERY PLAN
Function Scan on
Question to the core developers:
if I rank() a table, grouping by foo, but only want to get the first
X results for every rank,
will PostgreSQL be able to optimize that, or is it something left
for 8.5 in general?
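A hedged sketch of the top-X-per-group pattern being asked about, with a
hypothetical table t(foo int, val int):

  SELECT foo, val
    FROM (SELECT foo, val,
                 rank() OVER (PARTITION BY foo ORDER BY val DESC) AS r
            FROM t) sub
   WHERE r <= 3;  -- first X = 3 rows per rank group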
--
GJ
On Wed, Mar 11, 2009 at 1:46 PM, Jeff wrote:
> I've got a couple x25-e's in production now and they are working like a
> champ. (In fact, I've got another box being built with all x25s in it. its
> going to smoke!)
>
> Anyway, I was just reading another thread on here and that made me wonder
> ab
On Thu, Mar 12, 2009 at 3:13 PM, Kevin Grittner
wrote:
Scott Carey wrote:
>> "Kevin Grittner" wrote:
>>
>>> I'm a lot more interested in what's happening between 60 and 180
>>> than over 1000, personally. If there was a RAID involved, I'd put
>>> it down to better use of the numerous spind
According to the kernel folks, the anticipatory scheduler is even better
for dbs.
Oh well, it probably means everyone has to test it on their own at the
end of the day.
On Thu, Apr 9, 2009 at 3:32 PM, Matthew Wakeling wrote:
> On Thu, 9 Apr 2009, Grzegorz Jaśkiewicz wrote:
>>
>> acording to kernel folks, anticipatory scheduler is even better for dbs.
>> Oh well, it probably means everyone has to test it on their own at the
>> end of da
On Thu, Apr 9, 2009 at 3:42 PM, Kevin Grittner
wrote:
> Grzegorz Jaśkiewicz wrote:
>> (btw, CFQ is the anticipatory scheduler).
>
> These guys have it wrong?:
>
> http://www.wlug.org.nz/LinuxIoScheduler
sorry, I meant that CFQ replaced it :) (it is the default now).
--
GJ
On Fri, Apr 10, 2009 at 7:07 PM, Josh Berkus wrote:
> Yet 1000's of users are running PostgreSQL on Windows in production. It
> really depends on what kind of application you're running, and what its
> demands are. For a CMS or a contact manager or a personnel directory? No
> problem. For a cen
2009/4/13 Ognjen Blagojevic :
> It is a student database for the college which is a client of ours. The size
> of the database should be around 1GB, half being binary data (images). Not
> more than 100 users at the time will be working with the application.
nice, if you want to store pics, I sugge
On Wed, Apr 15, 2009 at 1:41 AM, Brian Cox wrote:
> ts_defect_meta_values has 460M rows. The following query, in retrospect not
> too surprisingly, runs out of memory on a 32 bit postgres:
>
> update ts_defect_meta_values set ts_defect_date=(select ts_occur_date from
> ts_defects where ts_id=ts_de
create index foobar on tbl (col desc);
On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling wrote:
>
> I have a query that is executed really badly by Postgres. It is a nine table
> join, where two of the tables are represented in a view. If I remove one of
> the tables from the query, then the query runs very quickly using a
> completel
On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling wrote:
> SELECT *
> FROM
> gene AS a1_,
> intergenicregion AS a2_,
> regulatoryregion AS a3_,
> chromosome AS a4_,
> location AS a5_,
> dataset AS a6_,
> LocatedSequenceFeatureOverlappingFeatures AS indirect0,
> BioEntities
crawler=# select * from assigments;
 jobid | timeout | workerid
-------+---------+----------
(0 rows)

Time: 0.705 ms
crawler=# \d+ assigments
        Table "public.assigments"
 Column | Type | Modifiers
--------+------+-----------
2009/4/18 Tom Lane :
> =?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= writes:
>> That expected 1510 rows in 'assigments' seems to be pretty off,
>
> The planner does not trust an empty table to stay empty. Every
> Postgres version in living memory has acted like that; it's not
> new to 8.4.
ok, thanks
Qui
2009/4/18 Heikki Linnakangas :
> Grzegorz Jaśkiewicz wrote:
>>
>> Can correlation be negative ?
>
> Yes, if the data in the column are in descending order. For example:
>
> postgres=# CREATE TABLE foo(id int4);
> CREATE TABLE
> postgres=# INSERT INTO foo SELE
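A hedged completion in the spirit of Heikki's (truncated) example:
descending data yields a correlation near -1.

  CREATE TABLE foo (id int4);
  INSERT INTO foo SELECT generate_series(1000, 1, -1);
  ANALYZE foo;
  SELECT correlation FROM pg_stats
   WHERE tablename = 'foo' AND attname = 'id';
  -- roughly -1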
BETWEEN X AND Y
On Mon, Apr 20, 2009 at 2:55 PM, Rafael Domiciano
wrote:
> Hello People,
>
> I have started a project to review the SQLs of our internal software.
> Lots of the problems are about SQL logic, or unnecessary joins with
> tables, and so on.
> But the software has lots of SQL with dates, doin
EXISTS won't help much either; PostgreSQL is not too fast when it
comes to that sort of approach.
A join is always going to be fast; it is about time you learn joins and
use them ;)
Use a join instead of WHERE ... IN (...);
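A hedged sketch of that rewrite, with hypothetical tables orders and
customers:

  -- instead of:
  SELECT o.id FROM orders o
   WHERE o.customer_id IN (SELECT id FROM customers WHERE active);
  -- use:
  SELECT o.id FROM orders o
    JOIN customers c ON c.id = o.customer_id
   WHERE c.active;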
2009/5/25 Scott Marlowe :
>
> So, in 2000 databases, there's only an average of 2 relations per db
> and 102 dead rows? Cause that's all you got room for with those
> settings.
>
> Whats the last 20 or so lines of vacuum verbose as run by a superuser say?
according to
http://www.postgresql.org/
2009/5/25 Łukasz Jagiełło :
> On 25 May 2009 at 17:32, Scott Marlowe
> wrote:
>>> Recently changed the postgresql server from an Amazon EC2 small to a
>>> large one. That gives me x86_64 arch, a two-core cpu and 7.5GB ram. Atm
>>> got almost ~2000 small databases on that server and autovacuum wo
Try creating an index on all three columns.
Btw, 38 ms is pretty fast. If you run that query very often, do prepare
it, cos I reckon it takes a few ms just to create the plan for it.
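A hedged sketch of preparing such a query, with a hypothetical table
tbl(a int, b int, c int):

  PREPARE fetch_rows (int, int, int) AS
      SELECT * FROM tbl WHERE a = $1 AND b = $2 AND c = $3;
  EXECUTE fetch_rows(1, 2, 3);  -- reuses the stored plan on each call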
You have to VACUUM ANALYZE after you've created the index, afaik.
No, count(*) is still counting rows.
It depends on how soon you need to access that data after it's
created. The way I do it in my systems: I get data from 8 points, a bit
less than you - but I dump it to CSV and import it on the database host
(a separate server).
Now, you could go to BDB or whatever, but that's not the solution.
So,
On Fri, May 29, 2009 at 2:54 AM, Greg Smith wrote:
> The PostgreSQL connection handler is known to be bad at handling high
> connection loads compared to the popular pooling projects, so you really
> shouldn't throw this problem at it. While kernel problems stack on top of
> that, you really sho
2009/5/29 Scott Marlowe :
>> if it is implemented somewhere else better, shouldn't that make it
>> obvious that postgresql should solve it internally ? It is really
>> annoying to hear all the time that you should add additional path of
>> execution to already complex stack, and rely on more code
2009/5/29 Scott Marlowe :
>
> Both Oracle and PostgreSQL have fairly heavy backend processes, and
> running hundreds of them on either database is a mistake. Sure,
> Oracle can handle more transactions and scales a bit better, but no
> one wants to have to buy a 128 way E15K to handle the load
Damn, I agree with you Scott. I wish I had enough cash here to employ
Tom and the other pg magicians to improve performance for all of us ;)
The thing is though, PostgreSQL is mostly used by companies that either
don't have that sort of cash but still like to get the performance,
or companies that have 'why
PostgreSQL isn't very efficient with subselects like that;
try:
explain select c.id from content c LEFT JOIN (select min(id) AS id
from content group by hash) cg ON cg.id=c.id WHERE cg.id is null;
On Wed, Jun 17, 2009 at 8:33 AM, Albe Laurenz wrote:
>
> I don't understand your data model well enough to understand
> the query, so I can only give you general hints (which you probably
> already know):
He is effectively joining the same table 4 times in a for loop to get
the result; this is veeery in
On Thu, Jun 18, 2009 at 6:06 PM, Brian Cox wrote:
> these queries are still running now 27.5 hours later... These queries are
> generated by some java code and in putting it into a test program so I could
> capture the queries, I failed to get the id range correct -- sorry for
> wasting your time
On Thu, Jun 18, 2009 at 6:16 PM, Brian Cox wrote:
> Grzegorz Jakiewicz [[email protected]] wrote:
>
>> this might be quite bogus question, just a hit - but what is your
>> work_mem set to ?
>> Guys, isn't postgresql giving hudge cost, when it can't sort in memory ?
>>
> work_mem = 64MB
>
try incr
On Sun, Jun 21, 2009 at 9:01 PM, Justin Graf wrote:
> work_mem = 51024     # min 64, size in KB
>
> That's a lot of memory dedicated to work_mem; if you have 30 connections
> open this could eat up 1.5 gigs, pushing data out of cache.
I thought work_mem is the max memory that can be
Wouldn't it be better just to store the last time the user visited the
topic, or the forum in general, and compare that?
On Thu, Jul 9, 2009 at 5:26 PM, Craig James wrote:
> Suppose I have a large table with a small-cardinality CATEGORY column (say,
> categories 1..5). I need to sort by an arbitrary (i.e. user-specified)
> mapping of CATEGORY, something like this:
>
> 1 => 'z'
> 2 => 'a'
> 3 => 'b'
> 4 => 'w'
>
2009/7/9 Tom Lane :
> =?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= writes:
>> On Thu, Jul 9, 2009 at 5:26 PM, Craig James
>> wrote:
>>> Suppose I have a large table with a small-cardinality CATEGORY column (say,
>>> categories 1..5). I need to sort by an arbitrary (i.e. user-specified)
>>> mapping of CA
On Tue, Jul 21, 2009 at 1:42 PM, Doug Hunley wrote:
> Just wondering is the issue referenced in
> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php
> is still present in 8.4 or if some tunable (or other) made the use of
> hyperthreading a non-issue. We're looking to upgrade our
On Tue, Jul 21, 2009 at 3:16 PM, Scott Marlowe wrote:
> On Tue, Jul 21, 2009 at 6:42 AM, Doug Hunley wrote:
>> Just wondering is the issue referenced in
>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php
>> is still present in 8.4 or if some tunable (or other) made the use of
How about normalizing the schema for a start?
By the looks of it, you have a huge table with plenty of varchars; that
smells like a badly designed db.
That seems to be the killer:
and time >= extract ('epoch' from timestamp '2009-08-12')
and time < extract ('epoch' from timestamp '2009-08-13' )
You probably need an index on time:
CREATE INDEX foo ON tbl ("time");
or something like that; vacuum analyze and
postgresql was faster than the files ;)
(sorry, I just couldn't resist).
Teach it not to generate WHERE ... IN (subquery), as this can be quite
slow on PostgreSQL. Use joins instead.
Looks like the planner was wrong about the rowcount in one place: Hash IN Join
(cost=2204.80..4809.31 rows=292 width=202) (actual
time=12.856..283.916 rows=15702 loops=1)
I have no idea why, proba
On Tue, Sep 15, 2009 at 9:10 PM, Andrzej Zawadzki wrote:
> So, I was close - a bad index... DESCending is much better.
> Thanks to Grzegorz Jaśkiewicz, who has strengthened me in this conjecture.
>
> I'm posting this - maybe someone will find something useful in that case.
>
> ps. query was and is g
Not only is that slow, it is also limited, as you can see. Use something
like this instead:
http://gjsql.wordpress.com/2009/04/19/how-to-speed-up-index-on-bytea-text-etc/
On Tue, Sep 22, 2009 at 1:36 PM, Alan McKay wrote:
> Too high? How high is too high?
In a very simple scenario, you have 100 connections open, and all of
them run the query that was the reason you bumped work_mem to 256M;
that is potentially 100 x 256MB = 25GB for sorts alone. All of a sudden
PostgreSQL starts to complain about lack of RAM, beca
On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay wrote:
>> Best practice to avoid that is to bump work_mem temporarily
>> before the query, and then lower it again; that lowers the chance of
>> memory exhaustion.
>
> Interesting - I can do that dynamically?
you can do: set work_mem = '128MB'; select 1; set work_mem to default;
On Thu, Sep 24, 2009 at 9:27 AM, wrote:
> Hi.
>
> I have a transaction running at the database for around 20 hours .. still
> isn't done. But during the last hours it has come to the point where it
> really hurts performance of "other queries".
>
> Given pg_stat_activity output there seems to be
On Fri, Sep 25, 2009 at 9:06 AM, Shiva Raman wrote:
> Hi Gerhard
> I also found the pg_log has 73 G of data .
>
> clusternode2:/var/lib/pgsql/data # du -sh pg_log/
> 73G pg_log/
>
> Is it necessary to keep this Log files? Can i backup the logs and delete it
> from the original directory ? Is
2009/9/25 Shiva Raman
> As suggested, i had changed the log_statement='ddl' and now it is logging
> only
> the ddl statements . thanks for the tip.
> Can i delete the old log files in pg_log after backing up as zip archive ?
> is it neccesary to keep those log files ?
>
they're yours, you can d
If you reuse that set a lot, how about storing it in a table and doing the
join on the db side? If it is large, it sometimes makes sense to create a
temp table just for a single query (I use that sort of stuff for comparing
with a few M records).
But temp tables in that case have to be short-lived, as the
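A hedged sketch of that approach, with a hypothetical big_table(id int):

  CREATE TEMP TABLE ids (id int);
  -- load the application-side set, e.g. via COPY or a batch INSERT:
  INSERT INTO ids VALUES (1), (17), (42);
  ANALYZE ids;
  SELECT t.* FROM big_table t JOIN ids USING (id);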
On Mon, Oct 5, 2009 at 1:24 PM, Omar Kilani wrote:
>
>
> I'm not really sure what the alternatives are -- it never really makes
> sense to get the selectivity for thousands of items in the IN clause.
> I've never seen a different plan for the same query against a DB with
> that patch vs without -
On Mon, Oct 5, 2009 at 2:52 PM, Matthew Wakeling wrote:
>
> mnw21-modmine-r13features-copy=# select count(*) from project;
>  count
> -------
>     10
> (1 row)
>
> mnw21-modmine-r13features-copy=# select count(*) from intermineobject;
>   count
> ----------
>  26344616
> (1 row)
>
> mnw21-modmine-r
2009/10/5 Matthew Wakeling
>
> Yes, that does work, but only because id is NOT NULL. I thought Postgres
> 8.4 had had a load of these join types unified to make it less important how
> the query is written?
>
well, as a rule of thumb - unless you can't think of a default value for a
column - don't use nulls. So using nulls as default 'idunno' - is a bad
On Mon, Oct 5, 2009 at 8:35 PM, Guy Rouillier wrote:
> Grzegorz Jaśkiewicz wrote:
>
>>
>> well, as a rule of thumb - unless you can't think of a default value of
>> column - don't use nulls. So using nulls as default 'idunno' - is a bad
>>
On Mon, Oct 12, 2009 at 12:21 PM, S Arvind wrote:
> In the query below, both tables have less than 1 million rows. Can you
> tell me the reason for this plan?
> Why is it taking such extensive cost, with a seq scan and sorting? What
> is Materialize?
>
> select 1 from service_detail
> left join non_service_detail on
Btw, what's the version of the db?
What's the work_mem setting?
Try setting work_mem to a higher value, as PostgreSQL will fall back to disc
sorting if the content doesn't fit in work_mem, which it probably doesn't
(8.4+ shows the memory usage for sorting, which your explain doesn't have).
2009/10/12 Matthew Wakeling
> This is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE,
> it would show how much memory was used, and whether it was a disc sort or an
> in-memory sort. As it is only an EXPLAIN, the query hasn't actually been
> run, and we have no information about
2009/10/12 S Arvind
> Thanks Grzegorz,
> But work_mem is for each process (connection), right? So if I keep
> it at more than 10MB, will it not affect the overall performance?
>
It will. But the memory is only allocated when needed.
You can always set it before running that particular query, and th
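A hedged sketch of scoping such a setting to one transaction (SET LOCAL
reverts automatically at COMMIT or ROLLBACK):

  BEGIN;
  SET LOCAL work_mem = '256MB';
  SELECT count(*)
    FROM (SELECT g FROM generate_series(1, 1000000) g
           ORDER BY random()) s;
  COMMIT;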