On Thu, Mar 5, 2009 at 1:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jonathan Hseu vom...@vomjom.net writes:
Sort (cost=11684028.44..11761274.94 rows=30898601 width=40)
Sort Key: time
- Bitmap Heap Scan on ticks (cost=715657.57..6995196.08 rows=30898601
width=40)
Recheck
On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca tre...@gmail.com wrote:
Hey,
I have a table that links content together and it currently holds
about 17 million records. A typical query is a join with a content table and
a link table:
noovo-new=# explain analyze SELECT core_accessor.id,
On Tue, Mar 3, 2009 at 12:20 PM, Sebastjan Trepca tre...@gmail.com wrote:
But it's already attached in the first mail or am I missing something?
If you don't see it, check this: http://pastebin.com/d71b996d0
Woops, sorry, I thought you had sent plain EXPLAIN. I see it now.
The lowest level
On Tue, Mar 3, 2009 at 2:16 PM, Sebastjan Trepca tre...@gmail.com wrote:
Maybe this is useful: I removed the JOIN and it uses another
index (core_accessor_date_idx, on (date_posted, nooximity)), but
it's still hardly any better:
noovo-new=# explain analyze SELECT * FROM core_accessor WHERE
On Mon, Mar 2, 2009 at 1:22 PM, Brian Cox brian@ca.com wrote:
As you can see there are only 3 transactions and 1 starts 1 hour after
the drop begins. I'm still trying to figure out how to interpret the
pg_locks output, but (presumably) you/others on this forum have more
experience at this
On Sat, Feb 28, 2009 at 9:51 PM, Brian Cox brian@ca.com wrote:
Actually, they're all deadlocked. The question is why?
Here's a brief background. The ts_defects table is partitioned by occurrence
date; each partition contains the rows for 1 day. When the data gets old
enough, the partition
On Sat, Feb 28, 2009 at 11:20 AM, Alexander Staubo a...@bengler.no wrote:
On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas robertmh...@gmail.com wrote:
The problem here is that the planner estimates the cost of a Limit
plan node by adding up (1) the startup cost of the underlying plan
node
On Fri, Feb 27, 2009 at 3:18 PM, Alexander Staubo a...@bengler.no wrote:
On Sun, Feb 15, 2009 at 5:45 PM, Alexander Staubo a...@bengler.no wrote:
On Sun, Feb 15, 2009 at 5:29 AM, David Wilson david.t.wil...@gmail.com
wrote:
On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo a...@bengler.no
Just start up psql and type:
show work_mem;
(You could look in the config file too I suppose.)
...Robert
On Wed, Feb 25, 2009 at 1:53 PM, Farhan Husain russ...@gmail.com wrote:
On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas robertmh...@gmail.com wrote:
You still haven't answered
Please note that this (1792MB) is the highest that I could set for work_mem.
Yeah, that's almost certainly part of your problem.
You need to make that number MUCH smaller. You probably want a value
like 1MB or 5MB or maybe if you have really a lot of memory 20MB.
That's insanely high.
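A hedged sketch of checking and overriding work_mem for one session (the 4MB figure is illustrative, not a value from the thread):

```sql
-- Show the current value, as suggested above.
SHOW work_mem;

-- Override for this session only; each sort/hash node in each backend
-- may use up to work_mem, so a modest value is usually safer.
SET work_mem = '4MB';
```

A global change would go in postgresql.conf instead, followed by a reload.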
On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain russ...@gmail.com wrote:
Initially, it was the default value (32MB). Later I played with that value
thinking that it might improve the performance. But all the values resulted
in the same amount of time.
Well, if you set it back to what we consider to
On Wed, Feb 25, 2009 at 4:32 PM, Farhan Husain russ...@gmail.com wrote:
On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain russ...@gmail.com wrote:
Initially, it was the default value (32MB). Later I played with that
value
shared_buffers = 32MB  # min 128kB or max_connections*16kB
That's REALLY small for pgsql. Assuming your machine has at least 1G
of ram, I'd set it to 128M to 256M as a minimum.
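As a hedged illustration of that advice (values assume a machine with several GB of RAM; they are not taken from the thread), the relevant postgresql.conf lines might look like:

```ini
shared_buffers = 256MB        # dedicated buffer cache, per the advice above
work_mem = 4MB                # per sort/hash operation, per backend
effective_cache_size = 2GB    # planner hint: RAM the OS cache can offer
```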
As I wrote in a previous email, I had the value set to 1792MB (the highest I
could set) and
Here is the latest output:
ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,
jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where
A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND
Can you please elaborate a bit?
I thought that A0.Prop would ignore the composite index created on the
columns subj and prop but this does not seem to be the case.
Yeah, I think you're barking up the wrong tree here. I think Tom had
the correct diagnosis - what do you get from show work_mem?
On Mon, Feb 23, 2009 at 7:26 AM, Kouber Saparev kou...@saparev.com wrote:
Now, recently I have altered some of the default parameters in order to get
as much as possible out of the hardware - 12 GB of RAM, 8 processors. So, I
guess I have done something wrong, thus the planner is taking that
On Sat, Feb 21, 2009 at 10:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
Are there any optimizations planned for the case where columns are
defined as NOT NULL?
We might get around to recognizing that case as an antijoin sometime.
It's nontrivial
First of all, you need to do some research on the benchmark kit itself,
rather than blindly downloading and using one. BenchmarkSQL has significant
bugs in it which affect the result. I can say that authoritatively as I
worked on/with it for quite a while. Don't trust any result that comes
On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
ISTM you are the one throwing out unsubstantiated assertions without
data to back them up. The OP ran a benchmark, showed hardware/configs, and
On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban
alex.gor...@gmail.com wrote:
Hi,
I have a table containing bytea and text columns. It is my storage for
image files and their labels. Labels can be 'original' and 'thumbnail'.
I have a C-function defined in a *.so library and a corresponding declaration
On Tue, Feb 17, 2009 at 12:46 PM, Alexander Gorban
alex.gor...@gmail.com wrote:
On Tue, 17/02/2009 at 12:24 -0500, Robert Haas wrote:
On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban
alex.gor...@gmail.com wrote:
Hi,
I have a table containing bytea and text columns. It is my storage
On Tue, Feb 10, 2009 at 5:31 AM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
I have a query in which two huge tables (A,B) are joined using an indexed
column and a search is made on tsvector on some column on B. Very limited
rows of B are expected to match the query on tsvector column.
On Mon, Feb 9, 2009 at 10:44 AM, justin jus...@emproshunts.com wrote:
Matthew Wakeling wrote:
On Sat, 7 Feb 2009, justin wrote:
In a big database a checkpoint could get very large before the time had
elapsed, and if the server crashed all that work would be rolled back.
No. Once you commit a
effective_cache_size
This is just a hint to tell the planner how much cache will generally be
available.
ok, but available for what?
The documentation on these parameters is really very good.
http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html
On Fri, Feb 6, 2009 at 11:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
Besides PK and uq-constraint indices I have this index:
CREATE INDEX transactions_idx__client_data ON transactions
USING btree (transaction_client_id,
On Fri, Feb 6, 2009 at 12:41 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
What's weird about this example is that when he sets enable_seqscan to
off, the bitmap index scan plan is actually substantially faster, even
though it in fact does scan
Just guessing here, but what values are you using for
join_collapse_limit and from_collapse_limit, and what happens if you
make them much bigger (like 100)?
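A quick way to test that suggestion in a session might look like the following (100 matches the value floated above; the idea is to let the planner consider reordering larger join trees than the default of 8 allows):

```sql
SET join_collapse_limit = 100;
SET from_collapse_limit = 100;
-- then re-run the slow query under EXPLAIN ANALYZE to compare plans
```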
...Robert
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
On Wed, Feb 4, 2009 at 7:35 AM, Gregory Stark st...@enterprisedb.com wrote:
Robert Haas robertmh...@gmail.com writes:
That's good if you're deleting most or all of the parent table, but
what if you're deleting 100,000 values from a 10,000,000 row table?
In that case maybe I'm better off
On Tue, Feb 3, 2009 at 4:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Robert Haas escribió:
Have you ever given any thought to whether it would be possible to
implement referential integrity constraints with statement-level
triggers instead
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox brian@ca.com wrote:
I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
ts_defects;
Result: out of memory/Can't allocate size: 32
I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
or queries on this
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox brian@ca.com wrote:
How much memory do you have in your machine? What is work_mem set to?
32G; work_mem=64M
Hmm. Well then I'm not sure why you're running out of memory, that
seems like a bug. Taking a long time, I understand. Crashing, not so
It's the pending trigger list. He's got two trigger events per row,
which at 40 bytes apiece would approach 4GB of memory. Apparently
it's a 32-bit build of Postgres, so he's running out of process address
space.
There's a TODO item to spill that list to disk when it gets too large,
but
On Thu, Jan 29, 2009 at 10:58 AM, anders.blaaga...@nordea.com wrote:
Hi,
If I have a view like:
create view X as (
select x from A
union all
select x from B)
and do
select max(x) from X
I get a plan like:
Aggregate
Append
Seq Scan on A
Seq Scan on B
If A and B are
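On releases of that era the aggregate sits above the Append node, so both branches are scanned in full. One workaround sketch, assuming A.x and B.x each carry a btree index, is to take the max per branch so each side can use its own index (GREATEST ignores NULL arguments in PostgreSQL):

```sql
SELECT greatest(
    (SELECT max(x) FROM A),   -- can use an index on A(x)
    (SELECT max(x) FROM B)    -- can use an index on B(x)
);
```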
On Thu, Jan 29, 2009 at 3:21 PM, henk de wit henk53...@hotmail.com wrote:
Hi,
When I try to restore a database dump on PostgreSQL 8.3
that's approximately 130GB in size and takes about 1 hour, I noticed index
creation makes up the bulk of that time. I'm using a very fast I/O subsystem
(16
On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
ssubb...@motorola.com wrote:
I'm in the process of tuning a query that does a sort on a huge dataset.
With work_mem set to 2M, I see the sort operation spilling to disk, writing
up to 430MB before returning the first 500 rows. Our query is of
Is there any tweaks to force pgsql to use index on description?
Even if you could force it to use the index, it wouldn't make the
query run faster.
As others have pointed out, what you really need is a different kind of index...
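One common "different kind of index" for word searches is a GIN full-text index, available in the 8.3 era. A hedged sketch, where items(description) is a hypothetical stand-in for the table in the thread:

```sql
-- A GIN full-text index supports word searches that a plain btree
-- on description cannot.
CREATE INDEX items_description_fts_idx
    ON items USING gin (to_tsvector('english', description));

-- Matching query form (must use the same to_tsvector expression):
SELECT * FROM items
WHERE to_tsvector('english', description) @@ to_tsquery('keyword');
```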
...Robert
My question: with that kind of volume and the underlying aggregation
functions (by product id, dates, possibly IP addresses or at least
countries of origin..) will PG ever be a good choice? Or should I be
looking at some other kind of tools? I wonder if OLAP tools would be
overkill for
I tried work_mem and maintenance_work_mem but they do not seem to make much
difference yet. Admittedly I had set them to 100M and 80M, so after reading a
little bit more I have found that I could easily set it to several GBs. But
I am not sure those are the correct config parameters to use for
Is that how it works for an index as well? I just found out that I have an
index that is 35GB, and the table is 85GB. (I will look into the index; it
works fine, but an index that is almost one third of the size of the table
seems a little bit strange.)
So if it works the same way and
I'm having a problem with a query that takes more or less 3.2 seconds to be
executed.
This query uses a view which encapsulates some calculations (in order to
avoid duplicating these calculations in several places in the project).
In order to keep that post readable, I've put the view
On Sun, Jan 18, 2009 at 10:30 PM, Chris dmag...@gmail.com wrote:
Hi all,
I have a view that looks like this:
SELECT
CASE
WHEN r.assetid IS NULL THEN p.assetid
ELSE r.assetid
END AS assetid,
CASE
WHEN r.userid IS NULL THEN p.userid
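As an aside, the CASE arms visible above are equivalent to COALESCE, which reads more compactly (same semantics; aliases r and p as in the original view):

```sql
SELECT
    COALESCE(r.assetid, p.assetid) AS assetid,
    COALESCE(r.userid,  p.userid)  AS userid
-- ... rest of the view unchanged
```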
Where you *will* have some major OS risk is with testing-level software
or bleeding edge Linux distros like Fedora. Quite frankly, I don't
know why people run Fedora servers -- if it's Red Hat compatibility you
want, there's CentOS.
I've had no stability problems with Fedora. The worst
On Mon, Jan 5, 2009 at 11:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
While looking at a complex query that is being poorly planned by
PostgreSQL 8.2.9, I discovered that any non-trivial CASE...WHEN
expression seems to produce a selectivity estimate
While looking at a complex query that is being poorly planned by
PostgreSQL 8.2.9, I discovered that any non-trivial CASE...WHEN
expression seems to produce a selectivity estimate of 0.005. This
also happens on HEAD.
psql (8.4devel)
Type "help" for help.
head=# create table tenk (c) as select
On Wed, Dec 31, 2008 at 11:01 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
The point of a prepared statement IMHO is to do the planning only once.
There's necessarily a tradeoff between that and having a plan that's
perfectly adapted to specific parameter values.
I think it has been
On Tue, Dec 30, 2008 at 4:05 AM, Laszlo Nagy gand...@shopzeus.com wrote:
We have several tables where the index size is much bigger than the table
size.
You'll usually get index bloat in roughly the same measure that you
get table bloat. If you always (auto)vacuum regularly, then the
amount of
On Tue, Dec 30, 2008 at 5:41 AM, Anton Bogdanovitch
poison@gmail.com wrote:
I have to insert rows into a table where 95% of them cause a primary key unique violation.
If you're inserting a lot of rows at once, I think you're probably
better off loading all of the data into a side table that does not
have a
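A sketch of that side-table approach under assumed names (a target table with primary key id, data arriving via COPY); the syntax is 8.x-era, predating INSERT ... ON CONFLICT:

```sql
CREATE TEMP TABLE staging (LIKE target);     -- no PK, so loading never fails

COPY staging FROM '/path/to/data.csv' CSV;   -- bulk load all rows

INSERT INTO target
SELECT DISTINCT ON (id) *                    -- drop duplicates within the load
FROM staging s
WHERE NOT EXISTS (                           -- skip keys already in the target
    SELECT 1 FROM target t WHERE t.id = s.id);
```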
On Tue, Dec 16, 2008 at 2:32 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
we have some bad queries (the developers are working on that); some of
them run in 17 secs, and that is the average, but when analyzing logs I
found that from time to time some of them take up to 3 mins (the same
is the exact query... I think it will be removed later today because it
is a bad query anyway... but my fear is that something like this happens
even with good ones...
maybe checkpoints could be the problem?
I have 8.3.5 and configured checkpoint_timeout at 15 minutes,
checkpoint_segments 6 and
as you see, explain analyze says it will execute in 175.952 ms, and
because of the network transfer of data, executing this from pgAdmin on
another machine runs for 17s... but from time to time pgFouine shows
up to 345.11 sec
Well, 86000 rows is not enough to give PostgreSQL a headache, even on
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake j...@commandprompt.com wrote:
Those intel SSDs sound compelling. I've been waiting for SSDs to get
competitive price and performance wise for a while, and when the
intels came out and I read the first benchmarks I immediately began
scheming.
You guys are right. I tried 'Miller' and it gave me the same result. Is there
any way to tune this so that for common last names the query run time
doesn't jump from 1s to 300s?
Thanks for the help!
Can you send the output of EXPLAIN ANALYZE for both cases?
...Robert
Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
not the road we want to go
Well, when select count(1) reads pages slower than my disk, it's 16x+ slower
than my RAM. Until one can demonstrate that the system can even read pages
in RAM faster than what disks will do next year, it doesn't matter much that
RAM is faster. It does matter that RAM is faster for sorts,