On 29/12/2013 19:51, Jeff Janes wrote:
On Thursday, December 19, 2013, Marc Cousin wrote:
Yeah, I had forgotten to set it up correctly on this test environment
(its value is correctly set in production environments). Putting it to a
few gigabytes here gives me this cost
Hi,
I'm hitting what I feel is a bit of a limitation of the optimizer (or
something I don't understand :) ).
Sorry, this is a rather long mail.
I have a workaround for the problem below, but I don't like cheating the
optimizer for no good reason.
First a little bit of context, because
On 19/12/2013 19:33, Jeff Janes wrote:
QUERY PLAN
--
Nested Loop (cost=0.56..4001768.10
On 19/12/2013 21:36, Kevin Grittner wrote:
Marc Cousin cousinm...@gmail.com wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group by path;
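For what it's worth, the same "insert the missing paths" pattern can also be written as an anti-join, which sometimes gives the planner another option; this is just a sketch using the table and column names from the query above, not a tested rewrite:

```sql
-- Alternative form of the insert above: a LEFT JOIN anti-join.
-- The planner can sometimes execute this as a hash anti-join instead
-- of running the NOT EXISTS subplan in a nested loop.
INSERT INTO path (path)
SELECT DISTINCT b.path
FROM batch b
LEFT JOIN path p ON p.path = b.path
WHERE p.path IS NULL;
```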
On Tue, 27 Sep 2011 12:45:00 +1000,
anthony.ship...@symstream.com wrote:
On Monday 26 September 2011 19:39, Marc Cousin wrote:
Because Index Scans are sorted, not Bitmap Index Scans, which
builds a list of pages to visit, to be then visited by the Bitmap
Heap Scan step.
Marc
On Tue, 27 Sep 2011 19:05:09 +1000,
anthony.ship...@symstream.com wrote:
On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
The thing is, the optimizer doesn't know if your data will be in
cache when you will run your query… if you are sure most of your
data is in the cache most
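If you really are sure the data is cached, the usual way to tell the planner so is through its cost settings; a minimal sketch (the value is illustrative, not a recommendation):

```sql
-- effective_cache_size only informs the planner about how much of the
-- data an index scan touches is likely to be cached by the OS and
-- shared_buffers; it allocates nothing itself.
SET effective_cache_size = '8GB';  -- illustrative value, tune for your box
```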
On Tuesday 01 March 2011 16:33:51, Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
It's worth pointing out that the only reason this effect is dominating
the runtime is that you don't have any statistics for these toy test
On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
On 28.02.2011 11:38, Marc Cousin wrote:
I've been facing a very large (more than 15 seconds) planning time in a
partitioned configuration. The amount of partitions wasn't completely
crazy, around 500, not in the thousands
On Monday 28 February 2011 16:35:37, Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
Testing here with a table with 1000 columns and 100 partitions, about
80% of the planning time is looking up the statistics
On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
On Monday 28 February 2011 16:35:37, Tom Lane wrote:
Could we see a concrete example demonstrating that? I agree with Heikki
that it's not obvious what you are testing that would have
Hi.
I hope I'm not going to expose an already known problem, but I couldn't find
it in the mailing list archives (I only found
http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543.php).
On one of my (non production) machines, I've just seen a very big performance
regression (I was doing a
On Friday 04 June 2010 15:59:05, Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
I hope I'm not going to expose an already known problem, but I couldn't
find it in the mailing list archives (I only found
http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543.php).
You sure
The few 'obvious' things I see :
ID and POLLID aren't of the same type (numeric vs bigint)
TTIME isn't indexed.
And as a general matter, you should stick to native datatypes if you don't
need numeric.
But as said in the other answer, maybe you should redo this schema and use
more consistent
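To illustrate the two fixes suggested above: the table name below is hypothetical, only the column names (ID, POLLID, TTIME) come from the mail.

```sql
-- Align the join columns on the same native type, and index TTIME.
-- Table name is assumed; adapt to the real schema.
ALTER TABLE poll_data ALTER COLUMN pollid TYPE bigint;  -- was numeric
CREATE INDEX poll_data_ttime_idx ON poll_data (ttime);
```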
It really has very little impact. It only affects index scans, and
even then only if effective_cache_size is less than the size of the
table.
Essentially, when this kicks in, it models the effect that if you are
index scanning a table much larger than the size of your cache, you
might have
On Thursday 16 July 2009 07:20:18 Marc Cousin wrote:
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
Marc Cousin wrote:
This mail contains the requested plans:
Plan 1
around 1 million records to insert, seq_page_cost 1, random_page_cost 4
- Hash (cost=425486.72
On Thursday 16 July 2009 22:07:25, Kevin Grittner wrote:
Marc Cousin cousinm...@gmail.com wrote:
the hot parts of these 2 tables are extremely likely to be in the
database or linux cache (buffer hit rate was 97% in the example
provided). Moreover, the first two queries of the insert
On Thursday 16 July 2009 23:54:54, Kevin Grittner wrote:
Marc Cousin cousinm...@gmail.com wrote:
to sum it up, should I keep these values (I hate doing this :) ) ?
Many people need to set the random_page_cost and/or seq_page_cost to
reflect the overall effect of caching on the active
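A minimal sketch of what that usually looks like (illustrative values, assuming a mostly-cached active set):

```sql
-- Lowering random_page_cost toward seq_page_cost tells the planner
-- that random reads mostly hit cache rather than spinning disk.
SET seq_page_cost = 1.0;     -- the baseline
SET random_page_cost = 1.5;  -- down from the default of 4
```

Once you have settled on values, set them in postgresql.conf rather than per-session.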
This mail contains the requested plans:
I've done them with the different configurations, since I had gone to the
effort of setting up the whole thing :)
Stats were updated between all runs. Each time is the first run of the query
(that's what we have in production with bacula)
And I added the executor
On Wednesday 15 July 2009 15:45:01, Alvaro Herrera wrote:
Marc Cousin wrote:
There are other things I am thinking of: maybe it would be better to
have sort space on another (and not DRBD'd) RAID set? We have a quite
cheap setup right now for the database, and I think maybe
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
Marc Cousin wrote:
This mail contains the requested plans:
Plan 1
around 1 million records to insert, seq_page_cost 1, random_page_cost 4
- Hash (cost=425486.72..425486.72 rows=16746972 width=92)
(actual time=23184.196
On Tuesday 14 July 2009 10:15:21, you wrote:
Marc Cousin wrote:
Your effective_cache_size is really small for the system you seem to
have - it's the size of the I/O caching your OS is doing, and it uses no
resources itself. And 800MB of that on a system with that amount of data
seems a bit
On Tuesday 14 July 2009 10:23:25, Richard Huxton wrote:
Marc Cousin wrote:
Temporarily I moved the problem to slightly larger batch sizes by
changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel
like the sorcerer's apprentice with this, as I told PostgreSQL that fetching
Don't hesitate to ask me for more.
Thanks for helping.
Marc Cousin
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
We regularly do all of dbcheck. This is our real configuration, there are
really lots of servers and lots of files (500 million files backed up every
month).
But thanks for mentioning that.
The thing is we're trying to improve Bacula with PostgreSQL in order to make
it able to cope with this
While this is not your questions, I still noticed you seem to be on 8.3 -
it might be a bit faster to use GROUP BY instead of DISTINCT.
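A sketch of the rewrite, reusing the path/batch names from earlier in this page (they are hypothetical for this particular query):

```sql
-- On 8.3, SELECT DISTINCT is always planned with a sort, while
-- GROUP BY can be planned as a HashAggregate. Both return the same
-- distinct set of values.
SELECT DISTINCT path FROM batch;
SELECT path FROM batch GROUP BY path;
```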
It didn't do a big difference, I already tried that before for this query.
Anyway, as you said, it's not the query having problems :)
Your
It's not that trivial with Oracle either. I guess you had to use shared
servers to get to that amount of sessions. Most of the time they're not
activated by default (dispatchers is at 0).
Granted, they are part of the 'main' product, so you just have to set up
dispatchers, shared servers,
On Sunday 30 November 2008 19:45:11, tmp wrote:
I am struggling with the following query which fetches a random subset
of 200 questions that matches certain tags within certain languages.
However, the query takes forever to evaluate, even though I have a
limit 200 appended. Any
On Wednesday 11 July 2007 22:35:31, Tom Lane wrote:
Marc Cousin [EMAIL PROTECTED] writes:
Nevertheless, shouldn't the third estimate be smaller than or equal to the
sum of the two others?
The planner's estimation for subplan conditions is pretty primitive
compared to joinable
Hi,
I'm having a weird problem with a query:
I've simplified it to get the significant part (see end of message).
The point is I've got a simple
SELECT field FROM table WHERE 'condition1'
Estimated returned rows : 5453
Then
SELECT field FROM table WHERE 'condition2'
Estimated returned rows : 705
Hi...
Bacula uses no transactions right now, so every insert is done separately with
autocommit.
Moreover, the insert loop for the main table is done by several individual
queries to insert data in several tables (filename, dir, then file), so this
is slow.
There's work underway to speed that up,
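The gain from that work is essentially transaction batching; a sketch of the idea with illustrative table names (not Bacula's actual schema):

```sql
-- One commit (and so one fsync) for the whole batch, instead of one
-- per row as with autocommit.
BEGIN;
INSERT INTO filename (name) VALUES ('a.txt');
INSERT INTO path (path) VALUES ('/home/user/');
INSERT INTO file (pathid, filenameid) VALUES (1, 1);
-- ... thousands more rows ...
COMMIT;
```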
I hope I'm not going to say stupid things, but here's what I know (or I think
I know :) ) about bacula + postgresql
If I remember correctly (I already discussed this with Kern Sibbald a while
ago), bacula does each insert in its own transaction : that's how the program
is done, and of course
Hi,
I've been working on trying to partition a big table (I've never partitioned a
table in any other database till now).
Everything went ok, except one query that didn't work afterwards.
I've put the partition description, indexes, etc ..., and the explain plan
attached.
The query is
] wrote:
Did you set constraint_exclusion = true in postgresql.conf file?
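For reference, the setting in question (the on/off form shown here matches 8.1-8.3; the 'partition' value only exists from 8.4 on):

```sql
-- Lets the planner prove from each partition's CHECK constraints that
-- the partition cannot contain matching rows, and skip scanning it.
SET constraint_exclusion = on;
```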
On 12/13/05, Marc Cousin [EMAIL PROTECTED] wrote:
Hi,
I've been working on trying to partition a big table (I've never
partitioned a table in any other database till now).
Everything went ok, except one
On Wednesday 02 November 2005 14:54, Magnus Hagander wrote:
I've done the tests with rc1. This is still as slow on Windows... about
6-10 times slower than Linux (via IP socket), depending on using prepared
queries, etc.
By the way, we've tried to insert into
On Tuesday 06 September 2005 19:11, Merlin Moncure wrote:
This makes me wonder if we are looking in the wrong place. Maybe the
problem is coming from psql? More results to follow.
problem is not coming from psql.
One thing I did notice that in a 250k insert transaction the insert time
Hi,
I usually use PostgreSQL coupled with Linux, but I have to use Windows for a
particular project.
So I wanted to do some tests to know if the performance will be acceptable (I
don't need PostgreSQL to be as fast with Windows as with Linux, but it has to
be usable...).
I started with
In my experience win32 is on par with Linux generally, with a few gotchas on
either side. Are your times with fsync=no? It's much harder to give an
apples-to-apples comparison with fsync=on for various reasons.
It is with fsync=off on Windows, fsync=on on Linux.
Are you running
Hi. I hope I'm not asking too trivial a question here...
I'm having trouble with a (quite big) query, and can't find a way to make it
faster.
Here is the information :
Tables :
sces_vte - 2753539 rows
sces_art - 602327
sces_fsf - 8126
sces_frc - 7763
sces_tps - 38
2004 16:40, you wrote:
Marc Cousin [EMAIL PROTECTED] writes:
I'm having trouble with a (quite big) query, and can't find a way to make it
faster.
Seems like it might help if the thing could use a HashAggregate instead
of sort/group. Numeric is not hashable, so having those TO_NUMBER
On Wednesday 08 September 2004 16:56, you wrote:
Marc Cousin [EMAIL PROTECTED] writes:
The query has been generated by Business Objects... I'll try to suggest to the
developers to remove this constant (if they can)...
The fields used by the sort are of type numeric(6,0) or (10,0
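A sketch of what working around the non-hashable numeric sort keys could look like; the column names are hypothetical, only the table name comes from the mail above:

```sql
-- Casting the numeric(6,0)/numeric(10,0) keys to integer/bigint makes
-- them hashable, so the planner can consider a HashAggregate instead
-- of a sort followed by a GroupAggregate.
SELECT code::integer, sum(amount)
FROM sces_vte
GROUP BY code::integer;
```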