On Thu, Jan 13, 2011 at 5:41 PM, Robert Haas wrote:
> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane wrote:
>> Robert Haas writes:
>>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson
>>> wrote:
>>>> I still think that having UNION do de-duplication of each con
>> 'm all wet, because you'd still have to re-de-duplicate at the
>> end. But then why did the OP get a speedup? *scratches head*
>
> Because it all fit in memory and didn't swap to disk?
Doesn't make sense. The re-de-duplication at the end should use the
same amount of memory.
ing it back on-list.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
for every row matching the conditions on the driving table, while
> the !EXISTS is just a complement of join. It's all in the basic set theory
> which serves as a model for relational databases.
As Scott says, the real problem is the NULL handling. The semantics
are otherwise
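The NULL pitfall being alluded to is standard SQL, so it can be sketched with Python's sqlite3 module (table names invented here; Postgres behaves the same way): a NOT IN whose subquery returns a NULL can never prove a value absent, so the query returns no rows at all, while NOT EXISTS is a plain anti-join and returns the complement.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x INTEGER);
    CREATE TABLE b(x INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (1), (NULL);      -- the NULL is the troublemaker
""")

# x NOT IN (1, NULL) is never true: x <> NULL is unknown, so no row qualifies.
not_in = con.execute(
    "SELECT x FROM a WHERE x NOT IN (SELECT x FROM b)").fetchall()

# NOT EXISTS ignores the NULL and behaves as the set complement.
not_exists = con.execute(
    "SELECT x FROM a WHERE NOT EXISTS "
    "(SELECT 1 FROM b WHERE b.x = a.x)").fetchall()

print(not_in)      # []
print(not_exists)  # [(2,), (3,)]
```

This is why the two forms are only interchangeable when the subquery's column is known NOT NULL.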
ree (account_id);
> CREATE INDEX idx_video_created ON video USING btree (created);
> CREATE INDEX idx_video_fts ON video USING gin (fts);
> CREATE INDEX idx_video_hash ON video USING hash (hash);
> (here I tried both gist and gin indexes)
> I have 32Gb ram and 2 core quad E5520
conclusion.
>>
>> Is there a TODO here?
>
> it looks like, yes.
"Modify the planner to better estimate caching effects"?
ssible
>
> FYI, we do have a documentation section about how to configure Postgres
> for improved performance if you don't care about durability:
>
> http://developer.postgresql.org/pgdocs/postgres/non-durability.html
This sentence looks to me like it should be removed, or perhaps
-> Seq Scan
> on marinerstates msold2 (cost=0.00..2251.66 rows=41307 width=24) (actual
> time=0.002..24.552 rows=41250 loops=1)
> Filter:
> ((state)::text = 'Active'::text)
Looks like the bad selectivity estimate there is what's killing it.
one in these situations is add a Boolean to the table
that defaults to true, and an ON INSERT trigger that flips the Boolean
for any existing row with the same key to false. Then you can just do
something like "SELECT * FROM tab WHERE latest". And you can create
partial indexes etc:
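That flag-column pattern can be sketched as follows. This is a minimal illustration using Python's sqlite3 so it is self-contained; the table, column, and index names are invented, and Postgres supports the same trigger-plus-partial-index shape with its own trigger syntax.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE tab(key TEXT, version INTEGER, latest BOOLEAN DEFAULT 1);

    -- On every insert, flip 'latest' off for older rows with the same key.
    CREATE TRIGGER unmark_old AFTER INSERT ON tab
    BEGIN
        UPDATE tab SET latest = 0
        WHERE key = NEW.key AND rowid <> NEW.rowid;
    END;

    -- Partial index: covers only the rows the hot query actually reads.
    CREATE INDEX tab_latest_idx ON tab(key) WHERE latest;
""")

for row in [("a", 1), ("a", 2), ("b", 1)]:
    con.execute("INSERT INTO tab(key, version) VALUES (?, ?)", row)

latest = con.execute(
    "SELECT key, version FROM tab WHERE latest ORDER BY key").fetchall()
print(latest)  # [('a', 2), ('b', 1)]
```

The partial index stays small because it only covers the current rows, which is most of the point of the trick.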
nerally and autovacuum in particular
that make things much better, including enabling autovacuum by
default, multiple autovacuum worker threads, the visibility map, and
so on. It's fairly likely that everything that the OP is struggling
with on 8.1 would Just Work on 8.4 or 9.0.
On Fri, Jan 21, 2011 at 12:42 PM, Mladen Gogala
wrote:
> On 1/21/2011 12:09 PM, Robert Haas wrote:
>>
>> Looks like the bad selectivity estimate there is what's killing it.
>> Not sure I completely understand why 9.0.2 is coming up with such a
>> bad estimate,
cument is 9 years old. It
might be worth reading something a little more up-to-date. Perhaps:
http://www.postgresql.org/docs/current/static/largeobjects.html
sorting 23960kB. Dividing that by 149557 rows gives ~169 bytes/per
row, or roughly 49 rows per block, which works out to 3k blocks, or
about 24MB of data. Clearly we must be hitting a LOT more data than
that, or this would be much faster than it is, I would think.
Any chance this is 9.0.X? It'd
tuple won't fit on the same page as the existing heap tuple
should be treated as non-HOT. But nothing gets split in that case. I
think of a page split as an index event, and if these are HOT updates
there shouldn't be any index changes at all.
Can we see those stats again with n_tup_ins
ally recommend.
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
http://www.linux.com/learn/tutorials/394523:configuring-postgresql-for-pretty-good-performance
On Tue, Feb 1, 2011 at 3:31 PM, Grant Johnson wrote:
> Does vacuum full rewrite the whole table, or only the blocks with free
> space?
The whole table.
ving some of the performance benefits of having such
tables in the first place - namely, the local buffer manager. But you
could ANALYZE them by hand.
a = 1;
QUERY PLAN
--------------------------------------------------------------
Index Scan using foo_pkey on foo (cost=0.00..8.27 rows=1 width=36)
Index Cond: (a = 1)
(2 rows)
You're going to need to come up with actual examples of situations
that you think can be improved upon if you want
yze then rolled back, losing the stats and
forcing the next guy to do it all over again).
there, that would have been incredibly
> inappropriate. I've never seen a computer program do anything so
> stupid, actually; so I'm quite sure you're not always operating to
> the level they can manage.
Beep, time out. Everybody take a step or three back and calm down.
you've chosen,
and that's a lot of work and not terribly robust. The most common
type of "hard to fix" query problem - by far - is a bad selectivity
estimate. Being able to hint that would be worth more than any number
of hints about which indexes to use, in my book.
On Thu, Feb 3, 2011 at 4:54 AM, Vitalii Tymchyshyn wrote:
> 02.02.11 20:32, Robert Haas wrote:
>>
>> Yeah. Any kind of bulk load into an empty table can be a problem,
>> even if it's not temporary. When you load a bunch of data and then
>> immediately plan
ou're bulk loading for a long time and
it tries to run over and over. I'd really like to suppress all those
asynchronous ANALYZE operations and instead do ONE synchronous one at
the end, when we try to use the data.
Of course, the devil is in the nontrivial details.
at actually helped in the cases
where the existing settings aren't already sufficient.
n the common
case, but so far haven't thought of a solution that I'm entirely happy
with.
you pointed in the right
direction...
an be done without
significant disruption, but AFAIK nobody's come up with a better idea
yet.
nt to write the code and prove it's better than what we have
now, or some other approach that someone else may implement in the
meantime, hey, this is an open source project, and I like improvements
as much as the next guy. But my prediction for what it's worth is
that the results
yet
you're accusing the people who don't agree with you of being engaged
in a religious war. It seems to me that the shoe is on the other
foot. Religion is when you believe something first and then look for
evidence to support it. Science goes the other direction.
a giant data load, so we might need to
change the time of it a little, but if there's a problem with the
operation itself being too costly, this is the first I'm hearing of
it. We've actually worked *really* hard to make it cheap.
't run faster because of query
planner hints. They ran faster because of things like index-only
scans, fast full index scans, asynchronous I/O, and parallel query.
On Thu, Feb 3, 2011 at 8:37 PM, wrote:
> On Thu, 3 Feb 2011, Robert Haas wrote:
>
>> On Thu, Feb 3, 2011 at 7:39 PM, wrote:
>>>>
>>>> Yeah, but you'll be passing the entire table through this separate
>>>> process that may only need to see 1
On Fri, Feb 4, 2011 at 9:38 AM, Vitalii Tymchyshyn wrote:
> Actually for me the main "con" with streaming analyze is that it adds
> significant CPU burden to already not too fast load process.
Exactly.
lot position within the block?
Seems like you'd need .
lf,
> that way the overhead of the load is just copying the data in memory to the
> other process.
I think that's more expensive than you're giving it credit for.
But by all means implement it and post the patch if it works out...!
we should be willing to provide a way for those people
to not get fired when they hit the 0.1% of queries that can't be fixed
using existing methods. I don't know exactly what the right solution
is off the top of my head, but digging in our heels is not it.
than that,
and the statement on the Todo list gives the wrong impression, IMHO.
On Thu, Feb 10, 2011 at 12:01 PM, Tom Lane wrote:
> "Kevin Grittner" writes:
>> Robert Haas wrote:
>>> I don't know exactly what the right solution is off the top of my
>>> head, but digging in our heels is not it.
>
>> Well, I'm co
ing to be awful.
To put that another way, I've found that the optimizer copes pretty
well with adjusting plans as tables get bloated - mostly by using
index scans rather than sequential scans. It's possible there is some
improvement still to be had there, but I would be a lot more
int
>        a    b    c
> a      1   .4    0
> b     .1    1  -.3
> c     .2   .3    1
>
> and those correlations could be used to weigh how the single-column
> statistics should be combined.
If you can make it work, I'll take it... it's (much) easier said than
done, though.
single column, computed
with a trigger, that contains enough information to test the whole
WHERE-clause condition using a single indexable test against the
column value. Or sometimes you can get around it by partitioning the
data into multiple tables, say with the visible_from
true that in the specific case of a foreign key constraint,
we don't currently have anything in the planner that depends on that.
But I'm hoping to get around to working on inner join removal again
one of these days.
On Tue, Feb 22, 2011 at 11:21 PM, Noah Misch wrote:
> On Tue, Feb 22, 2011 at 10:18:36PM -0500, Robert Haas wrote:
>> On Wed, Feb 2, 2011 at 12:20 AM, Noah Misch wrote:
>> >> CREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER
>> >> VARYING
>
On Wed, Feb 23, 2011 at 12:31 PM, Tom Lane wrote:
> Robert Haas writes:
>> ...but that begs the question of why DROP INDEX needs an
>> AccessExclusiveLock. It probably needs such a lock *on the index* but
>> I don't see why we'd need it on the table.
>
> S
is and this and then
> I'll plan other part of the query based on statistics of what you've
> fetched"?
I've had that thought, too. It's pretty hard to see how to make it
work, but I think there are cases where it could be beneficial.
zes x =
4.0::numeric to x = 4::int and x = 3.5::numeric to constant false.
That would be cool, in a way, but I'm not sure it's really worth the
code it would take, unless it falls naturally out of some larger
project in that area.
e index for all the id_user. the forecasts for
> the table directory are +500 millions records and something like 1
> million distinct id_user.
On Sun, Feb 27, 2011 at 1:39 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Tue, Feb 8, 2011 at 5:04 PM, Josh Berkus wrote:
>>> I'm not saying that PostgreSQL couldn't do better on this kind of case,
>>> but that doing better is a major project, not a mino
2011/2/27 Віталій Тимчишин :
>
>
> On 27 February 2011 at 19:59, Robert Haas wrote:
>>
>> 2011/2/4 Віталій Тимчишин :
>> > Hi, all.
>> > All this optimizer vs hint thread reminded me about crazy idea that got
>> > to
>> > my head some ti
; subqueries and then re-applied after the APPEND,
PostgreSQL 9.1 will know how to do this, FWIW.
On Wed, Mar 2, 2011 at 9:11 AM, Thom Brown wrote:
> On 2 March 2011 19:38, Robert Haas wrote:
>> On Thu, Feb 24, 2011 at 11:38 AM, Dave Johansen
>> wrote:
>>> I also just noticed that an ORDER BY x LIMIT n optimization is not pushed
>>> down through the UNI
esting if it were a write-through cache rather
than a write-back cache, wouldn't it?
nner-or-left doesn't help with join
removal, but it might allow the join to be reordered. Maybe
"non-row-reducing-join" is better terminology than
"inner-or-left-join", but in any case I have a suspicion that inner
join removal will end up being implemented as a special
just select ctid,xmin,xmax,* from pg_index?
here.
I am a bit confused. Why would you need to install from source
instead of using an installer (either from EnterpriseDB or installing
via apt-get)?
ates in separate transactions
the size grows for a while and then levels off.
On Fri, Mar 4, 2011 at 5:26 AM, Vidhya Bondre wrote:
> select ctid,xmin,xmax,* from pg_index gives 2074 records.
Can you put them in a text file and post them here as an attachment?
On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke wrote:
> Robert Haas wrote:
>> Old row versions have to be kept around until they're no longer of
>> interest to any still-running transaction.
>
> Thanks for the explanation.
>
> Regarding the snippet above, why w
On Wed, Mar 2, 2011 at 11:31 PM, Adarsh Sharma wrote:
> Don't know why it uses Seq Scan on loc_context_terror as i have indexes on
> the desired columns as well.
I don't see how an index scan would help. The query appears to need
all the rows from that table.
quite surprising. There are only 14 rows in the table but
PG thinks 2140? Do you have autovacuum turned on? Has this table
been analyzed recently?
On Fri, Mar 4, 2011 at 12:00 PM, Mark Thornton wrote:
> On 04/03/2011 16:07, Robert Haas wrote:
>>
>> That seems quite surprising. There are only 14 rows in the table but
>> PG thinks 2140? Do you have autovacuum turned on? Has this table
>> been analyzed recently?
sactionid)::text =
(customerdetails.transactionid)::text)
Now why is there a cast to text there on both sides? Do those two
columns have exactly the same datatype? If not, you probably want to
fix that, as it can make a big difference.
Also, how many rows are there in events_events and how many i
On Mon, Mar 7, 2011 at 3:40 PM, Merlin Moncure wrote:
> On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas wrote:
>> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras wrote:
>>> -> BitmapAnd (cost=1282.94..1282.94
>>> rows=1430 width=0) (a
On Tue, Mar 8, 2011 at 4:24 PM, Tom Lane wrote:
> Robert Haas writes:
>> The reason I thought cross-column correlations might be relevant is
>> that the bitmap index scan on news_visible_from is quite accurate
>> (19976 estimated vs. 19932 actual) and the bitmap index scan
that it can do 2 scans
> of the index: one to get non-null data and a second to get null data. I don't
> know if the use case is prevalent enough to warrant the extra code though.
That would probably be harder, but useful. I thought about working on
it before but got sidetracked ont
On Thu, Mar 10, 2011 at 11:32 AM, Merlin Moncure wrote:
> On Thu, Mar 10, 2011 at 9:55 AM, Robert Haas wrote:
>> On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby wrote:
>>> Unfortunately, I don't think the planner actually has that level of
>>> knowledge.
>>
's how it works.
f we had a way to do this for the rare cases where
it would be useful, but we don't.
ng
done, so it IS using the indexes. Now that leaves open the question
of why it's not fast... but it's hard to guess the answer to that
question without seeing at least the EXPLAIN output, preferably
EXPLAIN ANALYZE.
> Hmm ... I wonder whether this means that the current work on
> parallelizing I/O (the posix_fadvise patch in particular) is a dead
> end. Because what that is basically going to do is expend more CPU
> to improve I/O efficiency. If you believe this thesis then that's
> not the road we want to g
> Well, when select count(1) reads pages slower than my disk, its 16x + slower
> than my RAM. Until one can demonstrate that the system can even read pages
> in RAM faster than what disks will do next year, it doesn't matter much that
> RAM is faster. It does matter that RAM is faster for sorts,
> You guys are right. I tried "Miller" and gave me the same result. Is there
> any way to tune this so that for the common last names, the query run time
> doesn't jump from <1s to >300s?
> Thanks for the help!
Can you send the output of EXPLAIN ANALYZE for both cases?
...Robert
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake wrote:
>> Those intel SSDs sound compelling. I've been waiting for SSDs to get
>> competitive price and performance wise for a while, and when the
>> intels came out and I read the first benchmarks I immediately began
>> scheming. Sadly, that was r
On Tue, Dec 16, 2008 at 2:32 PM, Jaime Casanova
wrote:
> we have a some bad queries (developers are working on that), some of
> them run in 17 secs and that is the average but when analyzing logs i
> found that from time to time some of them took upto 3 mins (the same
> query that normally runs in
> is the exact query... i think it will be removed later today because
> is a bad query anyway... but my fear is that something like happens
> even with good ones...
>
> maybe checkpoints could be the problem?
> i have 8.3.5 and configured checkpoint_timeout in 15 minutes,
> checkpoint_segments 6 and
> as you see, explain analyze says it will execute in 175.952ms and
> because of network transfer of data executing this from pgadmin in
> another machine it runs for 17s... but from time to time pgFouine is
> shown upto 345.11 sec
Well, 86000 rows is not enough to give PostgreSQL a headache, even
On Tue, Dec 30, 2008 at 4:05 AM, Laszlo Nagy wrote:
> We have serveral table where the index size is much bigger than the table
> size.
You'll usually get index bloat in roughly the same measure that you
get table bloat. If you always (auto)vacuum regularly, then the
amount of bloat in your inde
On Tue, Dec 30, 2008 at 5:41 AM, Anton Bogdanovitch
wrote:
> I have to insert rows to table with 95% primary key unique_violation.
If you're inserting a lot of rows at once, I think you're probably
better off loading all of the data into a side table that does not
have a primary key, and then wri
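The side-table approach can be sketched like this (table names invented; shown with Python's sqlite3 for a self-contained demo, but the same INSERT ... SELECT shape works in Postgres): bulk-load everything into an unconstrained staging table, then copy over only the keys not already present.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE main(id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE staging(id INTEGER, val TEXT);  -- no PK: bulk load never errors
    INSERT INTO main VALUES (1, 'old');
""")

# Bulk load, duplicates and all.
rows = [(1, "dup"), (2, "y"), (2, "y2"), (3, "z")]
con.executemany("INSERT INTO staging VALUES (?, ?)", rows)

# One pass: skip ids already in main and collapse in-batch duplicates.
con.execute("""
    INSERT INTO main
    SELECT id, min(val) FROM staging s
    WHERE NOT EXISTS (SELECT 1 FROM main m WHERE m.id = s.id)
    GROUP BY id
""")

final = con.execute("SELECT * FROM main ORDER BY id").fetchall()
print(final)  # [(1, 'old'), (2, 'y'), (3, 'z')]
```

With 95% duplicates this turns millions of individual constraint failures into one set-oriented filter.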
On Wed, Dec 31, 2008 at 11:01 AM, Alvaro Herrera
wrote:
>> The point of a prepared statement IMHO is to do the planning only once.
>> There's necessarily a tradeoff between that and having a plan that's
>> perfectly adapted to specific parameter values.
>
> I think it has been shown enough times t
While looking at a complex query that is being poorly planned by
PostgreSQL 8.2.9, I discovered that any non-trivial CASE...WHEN
expression seems to produce a selectivity estimate of 0.005. This
also happens on HEAD.
psql (8.4devel)
Type "help" for help.
head=# create table tenk (c) as select ge
On Mon, Jan 5, 2009 at 11:40 PM, Tom Lane wrote:
> "Robert Haas" writes:
>> While looking at a complex query that is being poorly planned by
>> PostgreSQL 8.2.9, I discovered that any non-trivial CASE...WHEN
>> expression seems to produce a selectivity estimate of
> Where you *will* have some major OS risk is with testing-level software
> or "bleeding edge" Linux distros like Fedora. Quite frankly, I don't
> know why people run Fedora servers -- if it's Red Hat compatibility you
> want, there's CentOS.
I've had no stability problems with Fedora. The worst
On Sun, Jan 18, 2009 at 10:30 PM, Chris wrote:
> Hi all,
>
> I have a view that looks like this:
>
> SELECT
>CASE
>WHEN r.assetid IS NULL THEN p.assetid
>ELSE r.assetid
>END AS assetid,
>CASE
>WHEN r.userid IS NULL THEN p.userid
>
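Incidentally, each of those CASE expressions is just COALESCE in disguise, which reads better and is less error-prone. A quick check of the equivalence (Python's sqlite3; the behavior is standard SQL, so Postgres agrees, and the column names are simplified from the view above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(r_assetid INTEGER, p_assetid INTEGER)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(None, 10), (7, 10), (None, None)])

# The view's CASE form: fall back to p_assetid when r_assetid is NULL.
case_form = con.execute("""
    SELECT CASE WHEN r_assetid IS NULL THEN p_assetid
                ELSE r_assetid END
    FROM t""").fetchall()

# COALESCE returns the first non-NULL argument: identical semantics.
coalesce_form = con.execute(
    "SELECT COALESCE(r_assetid, p_assetid) FROM t").fetchall()

print(case_form == coalesce_form)  # True
print(coalesce_form)               # [(10,), (7,), (None,)]
```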
> I tried work_mem and maintenance_work_mem but it does not seem to make much
> difference yet. Admittedly I had set it to 100M and 80M, so after reading a
> little bit more I have found that I could easily set it to several GBs. But
> I am not sure those are the correct config parameters to use fo
> Is that how it works for an index as well? I just found out that I have an
> index that is 35GB, and the table is 85GB. ( I will look into the index, it
> works fine, but an index that is almost one third of the size of the table,
> seems a little bit strange. )
> So if it works the same way an
> I'm having a problem with a query that takes more or less 3.2 seconds to be
> executed.
>
> This query uses a view which encapsulates some calculations (in order to
> avoid duplicating theses calculations at several places in the project).
>
> In order to keep that post readable, I've put the vie
> Is there any tweaks to force pgsql to use index on description?
Even if you could force it to use the index, it wouldn't make the
query run faster.
As others have pointed out, what you really need is a different kind of index...
...Robert
> My question: with that kind of volume and the underlying aggregation
> functions (by product id, dates, possibly IP addresses or at least
> countries of origin..) will PG ever be a good choice? Or should I be
> looking at some other kind of tools? I wonder if OLAP tools would be
> overkill for so
On Thu, Jan 29, 2009 at 10:58 AM, wrote:
> Hi,
>
> If I have a view like:
>
> create view X as (
> select x from A
> union all
> select x from B)
>
> and do
>
> select max(x) from X
>
> I get a plan like:
>
> Aggregate
> Append
> Seq Scan on A
> Seq Scan on B
>
> If A and B are indexed
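When the planner won't push the aggregate down through the Append itself, the query can be rewritten by hand so each branch computes its own indexable max() and the outer query takes the max of those. A sketch (Python's sqlite3 for a runnable demo; the rewrite itself is plain SQL and works the same way in Postgres):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x INTEGER);
    CREATE TABLE b(x INTEGER);
    CREATE INDEX a_x ON a(x);
    CREATE INDEX b_x ON b(x);
    INSERT INTO a VALUES (1), (5), (3);
    INSERT INTO b VALUES (2), (9), (4);
""")

# Naive form: aggregate over the appended full scans.
naive = con.execute("""
    SELECT max(x) FROM (SELECT x FROM a UNION ALL SELECT x FROM b)
""").fetchone()[0]

# Rewritten: one per-branch max() (each can walk its index from the
# high end), then the max of the two candidates.
rewritten = con.execute("""
    SELECT max(m) FROM (SELECT max(x) AS m FROM a
                        UNION ALL
                        SELECT max(x) FROM b)
""").fetchone()[0]

print(naive, rewritten)  # 9 9
```

Both forms return the same answer; only the amount of data each branch has to read differs.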
On Thu, Jan 29, 2009 at 3:21 PM, henk de wit wrote:
> Hi,
> When I try to restore a database dump on PostgreSQL 8.3
> that's approximately 130GB in size and takes about 1 hour, I noticed index
> creation makes up the bulk of that time. I'm using a very fast I/O subsystem
> (16 Mtron Pro 7535 SSDs
On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
wrote:
> I'm in the process of tuning a query that does a sort on a huge dataset.
> With work_mem set to 2M, i see the sort operation spilling to disk writing
> upto 430MB and then return the first 500 rows. Our query is of the sort
>
> select
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
> or queries on this
> table per
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox wrote:
>> How much memory do you have in your machine? What is work_mem set to?
>
> 32G; work_mem=64M
Hmm. Well then I'm not sure why you're running out of memory, that
seems like a bug. Taking a long time, I understand. Crashing, not so
much.
>> Did y
> It's the pending trigger list. He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory. Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
>
> There's a TODO item to spill that list to disk when it gets too large,
>
On Tue, Feb 3, 2009 at 4:17 PM, Tom Lane wrote:
> Alvaro Herrera writes:
>> Robert Haas escribió:
>>> Have you ever given any thought to whether it would be possible to
>>> implement referential integrity constraints with statement-level
>>> triggers instead o
On Wed, Feb 4, 2009 at 7:35 AM, Gregory Stark wrote:
> Robert Haas writes:
>
>> That's good if you're deleting most or all of the parent table, but
>> what if you're deleting 100,000 values from a 10,000,000 row table?
>> In that case maybe I'm
On Fri, Feb 6, 2009 at 11:14 AM, Tom Lane wrote:
> Mario Splivalo writes:
>> Besides PK and uq-constraint indices I have this index:
>
>> CREATE INDEX transactions_idx__client_data ON transactions
>> USING btree (transaction_client_id, transaction_destination_id,
>> transaction_operator_id, trans
On Fri, Feb 6, 2009 at 12:41 PM, Kevin Grittner
wrote:
>>>> Robert Haas wrote:
>> What's weird about this example is that when he sets enable_seqscan to
>> off, the bitmap index scan plan is actually substantially faster, even
>> though it in fact does sc
Just guessing here, but what values are you using for
join_collapse_limit and from_collapse_limit, and what happens if you
make them much bigger (like 100)?
...Robert
>>> effective_cache_size
>>
>> This is just a hint to tell the planner how much cache will generally be
>> available.
>
> ok, but available for what?
The documentation on these parameters is really very good.
http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html
http://www.post