Hi, all.
All this optimizer vs hint thread reminded me of a crazy idea that got into
my head some time ago.
I currently have two problems with the PostgreSQL optimizer:
1) Dictionary tables. A very common pattern is something like select * from
big_table where dictionary_id = (select id from dictionary where
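The snippet above is cut off; the pattern being described is presumably something like this (all names illustrative, not from the original):

```sql
-- A large fact table filtered through a small lookup ("dictionary") table.
SELECT *
FROM   big_table
WHERE  dictionary_id = (SELECT id FROM dictionary WHERE name = 'some value');
```

At plan time the optimizer does not know which id the subselect will return, so it cannot apply the per-value statistics on big_table.dictionary_id, which appears to be the complaint being raised.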
Mladen Gogala wrote:
Well, the problem will not go away. As I've said before, all other
databases have that feature and none of the reasons listed here
convinced me that everybody else has a crappy optimizer. The problem
may go away altogether if people stop using PostgreSQL.
A common
03.02.11 20:42, Robert Haas wrote:
2011/1/30 Віталій Тимчишин tiv...@gmail.com:
I was thinking whether a table file could be deleted if it has not a single live
row, and whether this could be done by vacuum. In that case vacuum on a table that
was fully updated recently could be almost as good as cluster
Andy Colson wrote:
Yes, I agree... for today. If you gaze into 5 years... double the
core count (but not the speed), double the IO rate. What do you see?
Four more versions of PostgreSQL addressing problems people are having
right now. When we reach the point where parallel query is the
I'm running all this on a 9.0 server with good enough hardware. The
query is:
SELECT news.id AS news_id
, news.layout_id
, news.news_relation_id
, news.author_id
Yes. And this has little to do with hints. It has to do with years
of development lead with THOUSANDS of engineers who can work on the
most esoteric corner cases in their spare time. Find the pg project a
couple hundred software engineers and maybe we'll catch Oracle a
little quicker.
Віталій Тимчишин wrote:
Hi, all.
All this optimizer vs hint thread
There is no optimizer vs. hint. Hints are a necessary part of the
optimizer in all other databases. Without hints Postgres will not get
used in the company that I work for, period. I was willing to wait but
the fatwa against
On 02/04/2011 07:56 AM, Mladen Gogala wrote:
Hints are a necessary part of the
optimizer in all other databases. Without hints Postgres will not get
used in the company that I work for, period.
I've said repeatedly that EnterpriseDB, a fork of PostgreSQL, has the
hints you seek, yet you seem
Mladen Gogala wrote:
I am even inclined to believe that deep down under the hood, this
fatwa has an ulterior motive, which disgusts me deeply. With hints,
there would be far fewer consulting gigs.
Now you're just being rude. Given that you have direct access to the
developers of the
On Thu, Feb 03, 2011 at 04:39:12PM -0800, da...@lang.hm wrote:
On Thu, 3 Feb 2011, Robert Haas wrote:
On Thu, Feb 3, 2011 at 3:54 PM, da...@lang.hm wrote:
with the current code, this is a completely separate process that knows
nothing about the load, so if you kick it off when you start the
Shaun Thomas wrote:
On 02/04/2011 07:56 AM, Mladen Gogala wrote:
Hints are a necessary part of the
optimizer in all other databases. Without hints Postgres will not get
used in the company that I work for, period.
I've said repeatedly that EnterpriseDB, a fork of PostgreSQL, has the
04.02.11 16:33, Kenneth Marshall wrote:
In addition, the streaming ANALYZE can provide better statistics at
any time during the load and it will be complete immediately. As far
as passing the entire table through the ANALYZE process, a simple
counter can be used to only send the required
Ivan Voras wrote:
The vanilla plan, with default settings is:
Pause here for a second: why default settings? A default PostgreSQL
configuration is suitable for systems with about 128MB of RAM. Since
you say you have good enough hardware, I'm assuming you have a bit
more than that. The
I am having huge performance problems with a table. Performance deteriorates
every day and I have to run REINDEX and ANALYZE on it every day. autovacuum
is on. Yes, I am reading the other thread about count(*) :)
but obviously I'm doing something wrong here
explain analyze select count(*)
On Fri, Feb 04, 2011 at 03:46:35PM +0100, felix wrote:
directly after REINDEX and ANALYZE:
Aggregate (cost=62348.70..62348.71 rows=1 width=0) (actual
time=15830.000..15830.000 rows=1 loops=1)
-> Seq Scan on fastadder_fastadderstatus (cost=0.00..61613.16
rows=294216 width=0) (actual
On Thu, Feb 03, 2011 at 09:05:52PM -0500, Robert Haas wrote:
On Thu, Feb 3, 2011 at 8:37 PM, da...@lang.hm wrote:
On Thu, 3 Feb 2011, Robert Haas wrote:
On Thu, Feb 3, 2011 at 7:39 PM, da...@lang.hm wrote:
Yeah, but you'll be passing the entire table through this separate
process
felix wrote:
explain analyze select count(*) from fastadder_fastadderstatus;
Aggregate (cost=62458.73..62458.74 rows=1 width=0) (actual
time=77130.000..77130.000 rows=1 loops=1)
-> Seq Scan on fastadder_fastadderstatus (cost=0.00..61701.18
rows=303018 width=0) (actual
On 02/04/2011 08:46 AM, felix wrote:
explain analyze select count(*) from fastadder_fastadderstatus;
Aggregate (cost=62458.73..62458.74 rows=1 width=0) (actual
time=77130.000..77130.000 rows=1 loops=1)
-> Seq Scan on fastadder_fastadderstatus (cost=0.00..61701.18
rows=303018 width=0)
On 02/04/2011 08:56 AM, Greg Smith wrote:
PostgreSQL version? If you're running on 8.3 or earlier, I would be
suspicous that your Free Space Map has been overrun.
That's my first inclination. If he says autovacuum is running, there's
no way it should be bloating the table that much.
On 04 Feb, 2011, at 02:56 PM, Mladen Gogala mladen.gog...@vmsinfo.com wrote:
Віталій Тимчишин wrote:
Hi, all.
All this optimizer vs hint thread
There is no optimizer vs. hint. Hints are a necessary part of the
optimizer in all other databases.
That has nothing to do with PostgreSQL:
2011/2/4 Frank Heikens frankheik...@mac.com:
On 04 Feb, 2011, at 02:56 PM, Mladen Gogala mladen.gog...@vmsinfo.com
wrote:
Віталій Тимчишин wrote:
Hi, all.
All this optimizer vs hint thread
There is no optimizer vs. hint. Hints are a necessary part of the
optimizer in all other databases.
On Fri, Feb 4, 2011 at 6:05 AM, Grant Johnson gr...@amadensor.com wrote:
Yes. And this has little to do with hints. It has to do with years
of development lead with THOUSANDS of engineers who can work on the
most esoteric corner cases in their spare time. Find the pg project a
couple
sorry, reply was meant to go to the list.
-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 5:17 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: stho...@peak6.com
On Fri, Feb 4, 2011 at 4:00 PM, Shaun Thomas stho...@peak6.com
reply was meant for the list
-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 4:39 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: Greg Smith g...@2ndquadrant.com
On Fri, Feb 4, 2011 at 3:56 PM, Greg Smith
On Fri, Feb 04, 2011 at 05:20:27PM +0100, felix wrote:
reply was meant for the list
-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 4:39 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: Greg Smith g...@2ndquadrant.com
On 02/04/2011 10:17 AM, felix wrote:
How big is this table when it's acting all bloated and ugly?
458MB
Wow! There's no way a table with 300k records should be that big unless
it's just full of text. 70 seconds seems like a really long time to read
half a gig, but that might be because
On 02/04/2011 10:03 AM, felix wrote:
max_fsm_pages | 153600 | Sets the
maximum number of disk pages for which free space is tracked.
max_fsm_relations | 1000 | Sets the
maximum number of tables and indexes for which free space is
On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas stho...@peak6.com wrote:
vacuumdb -a -v -z > vacuum.log
And at the end of the log, it'll tell you how many pages it wants, and how
many pages were available.
this is the dev, not live. but this is after it gets done with that table:
CPU
On Fri, Feb 4, 2011 at 10:38 AM, felix crucialfe...@gmail.com wrote:
On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas stho...@peak6.com wrote:
vacuumdb -a -v -z > vacuum.log
And at the end of the log, it'll tell you how many pages it wants, and how
many pages were available.
this is the dev,
vacuumdb -a -v -z -U postgres -W > vacuum.log
that's all, isn't it ?
it did each db
8.3 in case that matters
the very end:
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing public.seo_partnerlinkcategory
INFO:
On 02/04/2011 11:38 AM, felix wrote:
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing public.fastadder_fastadderstatus
INFO: fastadder_fastadderstatus: scanned 2492 of 2492 pages,
containing 154378 live rows and 0 dead rows; 3 rows in sample,
154378 estimated total rows
and there's
vacuumdb -a -v -z -U postgres -W > vacuum.log
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
Password:
cruxnu:nsbuildout crucial$
do you think it's possible that it just doesn't have anything to complain
about ?
or the password is affecting it ?
You can run vacuum verbose on just the postgres database and get the
global numbers at the end. gotta be a superuser as well.
# \c postgres postgres
postgres=# vacuum verbose;
lots deleted.
DETAIL: A total of 7664 page slots are in use (including overhead).
7664 page slots are required to
On 02/04/2011 12:14 PM, felix wrote:
do you think it's possible that it just doesn't have anything to
complain about ? or the password is affecting it ?
Why is it asking for the password over and over again? It shouldn't be
doing that. And also, are you running this as a user with superuser
felix wrote:
and do you agree that I should turn CLUSTER ON ?
I have no problem to stop all tasks to this table at night and just
reload it
You don't turn it on; it's a one time operation that does a cleanup. It
is by far the easiest way to clean up the mess you have right now.
Moving
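The one-time cleanup being described would look roughly like this. The table name comes from the thread, but the index name is an assumption; note that on 8.3, which felix reports running, CLUSTER uses the older `CLUSTER index ON table` syntax:

```sql
-- One-time rewrite of the bloated table in index order.
-- Takes an ACCESS EXCLUSIVE lock, so run it when the table is idle.
CLUSTER fastadder_fastadderstatus_pkey ON fastadder_fastadderstatus;  -- 8.3 syntax
-- On 8.4 and later the equivalent is:
--   CLUSTER fastadder_fastadderstatus USING fastadder_fastadderstatus_pkey;
ANALYZE fastadder_fastadderstatus;  -- refresh planner statistics afterwards
```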
On Fri, Feb 4, 2011 at 9:38 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
Actually for me the main con with streaming analyze is that it adds
significant CPU burden to already not too fast load process.
Exactly.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise
On Fri, Feb 4, 2011 at 4:19 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
Why do you expect such invasive code changes? I know little about
postgresql code layering, but what I propose (with changing delete to
truncate) is:
1) Leave tuple addressing as it is now
i.e. a block number and a
On Fri, Feb 4, 2011 at 11:38 AM, Greg Smith g...@2ndquadrant.com wrote:
You don't turn it on; it's a one time operation that does a cleanup. It is
by far the easiest way to clean up the mess you have right now. Moving
forward, if you have max_fsm_pages set to an appropriate number, you
All,
Seeing an issue which is new on me. On a mostly idle PostgreSQL server,
the stats collector is rewriting the entire stats file twice per second.
Version: 8.4.4
Server: Ubuntu, kernel 2.6.32
Server set up: ApacheMQ server. 25 databases, each of which hold 2-3
tables.
Filesystem: Ext4,
On 02/04/2011 01:01 PM, Scott Marlowe wrote:
begin;
select * into temporaryholdingtable from oldtable order by somefield;
truncate oldtable;
insert into oldtable select * from temporaryholdingtable;
commit;
That's usually how I do it, except for larger tables, I also throw in a
DROP INDEX for all the
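A fuller sketch of that rewrite, with the index handling mentioned in the reply (index and column names are placeholders; TRUNCATE is transactional in PostgreSQL, so the whole block can still roll back):

```sql
BEGIN;
SELECT * INTO temporaryholdingtable FROM oldtable ORDER BY somefield;
DROP INDEX oldtable_somefield_idx;      -- avoid index maintenance during reload
TRUNCATE oldtable;
INSERT INTO oldtable SELECT * FROM temporaryholdingtable;
CREATE INDEX oldtable_somefield_idx ON oldtable (somefield);
DROP TABLE temporaryholdingtable;
COMMIT;
```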
On Fri, Feb 4, 2011 at 7:34 PM, Shaun Thomas stho...@peak6.com wrote:
Why is it asking for the password over and over again? It shouldn't be
doing that.
because I asked it to: -W
on the production server I need to enter password and I'm testing on dev
first.
I just sudo tried it but still no
On Fri, Feb 4, 2011 at 12:26 PM, felix crucialfe...@gmail.com wrote:
I just sudo tried it but still no report
It's not about who you are in Unix / Linux, it's about who you are in
PostgreSQL. \du will show you who is a superuser. psql -U username
will let you connect as that user.
On 02/04/2011 01:26 PM, felix wrote:
because I asked it to: -W on the production server I need to enter
password and I'm testing on dev first.
Right. I'm just surprised it threw up the prompt so many times.
I just sudo tried it but still no report
Nono... you have to run the vacuum
ah right, duh.
yes, I did it as -U postgres, verified as a superuser
just now did it from inside psql as postgres
\c djns4
vacuum verbose analyze;
still no advice on the pages
On Fri, Feb 4, 2011 at 8:34 PM, Scott Marlowe scott.marl...@gmail.comwrote:
On Fri, Feb 4, 2011 at 12:26 PM, felix
On 02/04/2011 01:59 PM, felix wrote:
still no advice on the pages
I think it just hates you.
--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com
it probably has good reason to hate me.
ns=> SELECT n.nspname AS schema_name, c.relname AS table_name,
ns-> c.reltuples AS row_count,
ns-> c.relpages*8/1024 AS mb_used,
ns-> pg_total_relation_size(c.oid)/1024/1024 AS total_mb_used
ns-> FROM pg_class c
ns-> JOIN pg_namespace n ON
On 02/04/2011 02:14 PM, felix wrote:
oh and there in the footnotes to django they say don't forget to run
the delete expired sessions management every once in a while.
thanks guys.
Oh Django... :)
it won't run now because it's too big, I can delete them from psql though
You might be better
On 02/03/2011 10:57 AM, gnuo...@rcn.com wrote:
For PG to prosper in the future, it has to embrace the
multi-core/processor/SSD machine at the query level
As the person who brought up the original concern, I'll add that
multi-core at the query level really isn't important for us. Most of
our
Greg (Smith),
Given your analysis of fsync'ing behavior on Ext3, would you say that it
is better to set checkpoint_completion_target to 0.0 on Ext3?
--
-- Josh Berkus
PostgreSQL Experts Inc.
On Fri, Feb 4, 2011 at 2:18 PM, Mark Stosberg m...@summersault.com wrote:
It looks like it's going to be trivial-- Divide up the data with a
modulo, and run multiple parallel cron scripts that each processes a
slice of the data. A benchmark showed that this approach sped up our
processing 3x
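The modulo split can be sketched as N cron entries each running the same query with a different remainder (table and column names are hypothetical):

```sql
-- Worker k of 4 (k = 0, 1, 2, 3), one cron entry per worker:
SELECT id, payload
FROM   work_queue
WHERE  id % 4 = 0          -- worker 0 takes ids 0, 4, 8, ...
ORDER  BY id;
```

Because the remainders are disjoint, the workers never contend for the same rows.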
Greg Smith wrote:
Check out
http://www.indeed.com/jobtrends?q=postgres%2C+mysql%2C+oracle&relative=1
if you want to see the real story here. Oracle has a large installed
base, but it's considered a troublesome legacy product being replaced
+1 for Oracle being a troublesome
Mladen Gogala wrote:
Chris Browne wrote:
Well, the community declines to add hints until there is actual
consensus on a good way to add hints.
OK. That's another matter entirely. Who should make that decision? Is
there a committee or a person who would be capable of making that
I implemented table partitioning, and it caused havoc with a select
max(id) on the parent table - the query plan has changed from a
lightning-fast backwards index scan to a deadly seq scan. Both
partitions are set up with primary key index and draws new IDs from
the same sequence ... select
On 05/02/11 03:36, Mladen Gogala wrote:
Shaun, I don't need to convince you or the Postgres community. I
needed an argument to convince my boss.
My argument was that the sanctimonious and narrow-minded Postgres
community is unwilling to even consider creating the tools I need for
large porting
Mladen Gogala wrote:
Actually, it is not unlike a religious dogma, only stating that hints
are bad. It even says so in the wiki. The arguments are
1) Refusal to implement hints is motivated by distrust toward users,
citing that some people may mess things up.
Yes, they can, with and
gnuo...@rcn.com writes:
Time for my pet meme to wiggle out of its hole (next to Phil's, and a
day later). For PG to prosper in the future, it has to embrace the
multi-core/processor/SSD machine at the query level. It has to. And
it has to because the Big Boys already do so, to some extent,
On 02/04/2011 10:41 AM, Tom Lane wrote:
1. Autovacuum fires when the stats collector's insert/update/delete
counts have reached appropriate thresholds. Those counts are
accumulated from messages sent by backends at transaction commit or
rollback, so they take no account of what's been done by
On 04/02/2011 15:44, Greg Smith wrote:
Ivan Voras wrote:
The vanilla plan, with default settings is:
Pause here for a second: why default settings? A default PostgreSQL
configuration is suitable for systems with about 128MB of RAM. Since you
say you have good enough hardware, I'm assuming you
This is a known limitation of partitioning. One solution is to use a
recursive stored proc, which can use indexes. Such a solution is
discussed here:
http://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php
Regards,
Ken
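A non-recursive variant of the same idea is to take the max of each partition separately, so every child table can use its own index (partition names are illustrative):

```sql
-- Each inner SELECT is a fast backwards index scan on one child table;
-- the outer max() just picks the largest of the per-partition results.
SELECT max(id) FROM (
    SELECT max(id) AS id FROM big_table_2010
    UNION ALL
    SELECT max(id) AS id FROM big_table_2011
) AS per_partition;
```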
On Fri, Feb 4, 2011 at 5:17 PM, Bruce Momjian br...@momjian.us wrote:
Mladen Gogala wrote:
characteristic of a religious community chastising a sinner. Let me
remind you again: all other major databases have that possibility:
Oracle, MySQL, DB2, SQL Server and Informix. Requiring burden of
2011/2/4 Mark Kirkwood mark.kirkw...@catalyst.net.nz:
Given that there are no hints, what do I do to solve the problem of a slow
query suddenly popping up in production? If and when this situation occurs,
see how quickly the community steps in to help you solve it (and I'd bet it
will be solved
Tobias Brox wrote:
I implemented table partitioning, and it caused havoc with a select
max(id) on the parent table - the query plan has changed from a
lightning-fast backwards index scan to a deadly seq scan.
This problem was fixed in the upcoming 9.1:
If Oracle can patch their query planner for you in 24 hours, and you
can apply patch with confidence against your test then production
servers in an hour or so, great. Til then I'll stick to a database
that has the absolutely, without a doubt, best coder support of any
project I've ever used.
On Fri, 4 Feb 2011, Vitalii Tymchyshyn wrote:
04.02.11 16:33, Kenneth Marshall wrote:
In addition, the streaming ANALYZE can provide better statistics at
any time during the load and it will be complete immediately. As far
as passing the entire table through the ANALYZE process, a
On Sat, Feb 5, 2011 at 12:46 AM, da...@lang.hm wrote:
Actually for me the main con with streaming analyze is that it adds
significant CPU burden to already not too fast load process. Especially if
it's automatically done for any load operation performed (and I can't see
how it can be enabled
On Fri, 4 Feb 2011, Chris Browne wrote:
2. The query needs to NOT be I/O-bound. If it's I/O bound, then your
system is waiting for the data to come off disk, rather than to do
processing of that data.
yes and no on this one.
it is very possible to have a situation where the process