Thanks for the explanation!
Best Regards,
Marc-Olaf
Symbol
51,06% postgres [.] pglz_decompress
7,33% libc-2.12.so [.] memcpy
...
= End of example =
I wonder why the bitmap heap scan adds so much time on top of the
plain bitmap index scan.
It seems to me that the recheck is active although all blocks
ver your query can still be optimized:
=>
select count(*)
from claims
where exists (select *
from unnest("ICD9_DGNS_CD") x_
where x_ like '427%'
)
regards,
Marc Mamin
> So I figured I'd create a Function to encapsulate the concept:
>
ith small pending lists: is there a concurrency problem, or can both tasks
cleanly work in parallel?
best regards,
Marc mamin
anity further than not cleaning the pending list?
As I understand it, this list will be merged into the index automatically when
it gets full, independently of the vacuum setting.
Could this be an index bloat issue?
And last but not least, can I reduce the problem by configuration?
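For reference, a sketch of the knobs involved, assuming a GIN index my_gin_idx on table my_table (both names hypothetical; the fastupdate parameter exists since 8.4):

-- my_gin_idx / my_table are hypothetical names
-- disable the pending list so new entries go directly into the main index
ALTER INDEX my_gin_idx SET (fastupdate = off);
-- a plain VACUUM of the table also merges the pending list into the index
VACUUM my_table;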
regards,
hen dealing with large tables.
Here is a good starting link for this topic:
http://stackoverflow.com/questions/12604744/does-the-order-of-columns-in-a-postgres-table-impact-performance
regards,
Marc Mamin
r($2,
> 1)), greatest(array_upper($1, 1),array_upper($2, 1)), 1) AS i
> ) sub
>GROUP BY i
>ORDER BY i
>);
> $$ LANGUAGE sql STRICT IMMUTABLE;
It seems that both the GROUP BY and ORDER BY are superfluous and add some
cycles.
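For illustration, an element-wise array addition can rely on the ordering that ARRAY(SELECT ... FROM generate_series(...)) already guarantees, so it needs neither GROUP BY nor ORDER BY. A minimal sketch; the original function is truncated above, so its exact semantics are assumed:

-- sketch: element-wise addition of two int arrays, no GROUP BY / ORDER BY needed
CREATE OR REPLACE FUNCTION array_add(int[], int[])
RETURNS int[] AS $$
SELECT ARRAY(
  SELECT coalesce($1[i], 0) + coalesce($2[i], 0)
  FROM generate_series(1, greatest(array_upper($1, 1), array_upper($2, 1))) AS g(i)
)
$$ LANGUAGE sql STRICT IMMUTABLE;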
regards,
Marc Mamin
lead_id"
FROM "event" U1
WHERE U1."event_type" ='type_1'
UNION (
SELECT U1."lead_id" AS "lead_id"
FROM "event" U1
WHERE U1."event_type" = 'type_2'
INTERSECT
gged
>tables for output.
>
>Take a quick peek here:
>https://github.com/gbb/par_psql/blob/master/BENCHMARKS.md
>
>I'm wondering what I'm missing here. Any ideas?
>
>Graeme.
>
auto_explain might help giving some insight into what's going on:
http://www.postgresql
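A minimal way to enable it for a single session (a sketch; loading the module requires superuser rights):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;  -- log the plan of every statement
SET auto_explain.log_analyze = true;    -- include actual timings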
ful to put your result here:
http://explain.depesz.com/
regards,
Marc Mamin
>
>===
>
>
>Nested Loop (cost=33666.96..37971.39 rows=1 width=894) (actual
>time=443.556..966558.767 rows=45360 loops=1)
> Jo
e;
pg_index;
pg_constraint;
regards,
Marc Mamin
>- no other processes are likely to be interfering; nothing other than
>PostgreSQL runs on this machine (except for normal OS processes and New Relic
>server monitoring service); concurrent activity in PostgreSQL is low-level and
&g
>Thanks, best regards,
>- Gulli
>
Hi,
I've no clue about the time required by EXPLAIN,
but some more information is probably relevant to find an explanation:
- postgres version
- number of rows inserted by the query
- how clean is your catalog in regard to vacuum
( can you run vacuum full verbose & analyze it, and then retry the analyze
statement ?)
- any other process that may interfere, e.g. while locking some catalog tables ?
- statistic target ?
- is your temp table analyzed?
- any index on it ?
We have about 300'000 entries in our pg_class table, and I've never seen such
an issue.
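To check the catalog side, a sketch against the standard statistics views:

-- dead tuples and vacuum activity on the catalog tables most involved in planning
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_sys_tables
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_statistic');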
regards,
Marc Mamin
iven table:
"If more than one trigger is defined for the same event on the same relation,
the triggers will be fired in alphabetical order by trigger name"
( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )
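A hypothetical illustration: both triggers below fire on the same event, and a_log fires before b_log purely because of the names (t and log_row() are hypothetical too):

-- hypothetical names; alphabetical order of the trigger name decides the firing order
CREATE TRIGGER a_log BEFORE INSERT ON t FOR EACH ROW EXECUTE PROCEDURE log_row();
CREATE TRIGGER b_log BEFORE INSERT ON t FOR EACH ROW EXECUTE PROCEDURE log_row();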
regards,
Marc Mamin
AlexK987 writes:
>>> I've created a GIN index on an INT[] column, but it slows down the selects.
>>> Here is my table:
>>
>>> create table talent(person_id INT NOT NULL,
>>> skills INT[] NOT NULL);
>>
>>> insert into talent(person_id, skills)
>>> select generate_series, array[0, 1] || generate_ser
uivalent and fast:
explain analyze
WITH rare AS (
select * from talent
where skills @> array[15])
select * from rare
where skills @> array[1]
-- (with changed operator)
You might vary your query according to an additional table that keeps the
occurrence count of all skills.
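Such a helper table could be as simple as this sketch (skill_counts is a hypothetical name, to be refreshed periodically; the lateral unnest in FROM assumes 9.3+):

-- skill_counts is hypothetical; talent(skills) is the table from the thread
CREATE TABLE skill_counts AS
SELECT skill, count(*) AS cnt
FROM talent, unnest(skills) AS skill
GROUP BY skill;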
Not re
nts e1
join events e2 on e1.session_id = e2.session_id and e1.type = e2.type
where
e1.product_id = '82503'
group by e2.product_id, e2.site_id
)
SELECT
'82503' as product_id_1,
site_id,
product_id,
view_count,
purchase_count
FROM SALL
WHERE product_i
I ran into this oddity lately that goes against everything I thought I
understood and was wondering if anyone had any insight. Version/env
details at the end.
The root of it is these query times:
marcs=# select * from ccrimes offset 514 limit 1;
[...data omitted...]
(1 row)
Time: 650.280 ms
space as Postgres is very efficient about
NULL storage:
it marks all null values in a bitmap within the row header, so you need only
about one bit per null
instead of 4 bytes for zeros, and hence get rid of your I/O issue.
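A quick sketch to see the effect (table names hypothetical):

-- w_zeros / w_nulls are hypothetical names
CREATE TABLE w_zeros AS SELECT 0::int AS v FROM generate_series(1, 1000000) s;
CREATE TABLE w_nulls AS SELECT NULL::int AS v FROM generate_series(1, 1000000) s;
-- the NULL variant stores no data bytes, only the null bitmap in the row header
SELECT pg_size_pretty(pg_relation_size('w_zeros')) AS zeros,
       pg_size_pretty(pg_relation_size('w_nulls')) AS nulls;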
regards,
Marc Mamin
From: pgsql
[Craig]
>>If you haven't looked at clustering algorithms yet, you might want to do so.
>>Your problem is a special case of clustering, where you have a large number
>>of small clusters. A good place to start is the overview on Wikipedia:
>>http://en.wikipedia.org/wiki/Cluster_analysis
According t
ace concurrently.
To reduce I/O due to swap, you can consider increasing maintenance_work_mem on
the connections/sessions
that build the indexes.
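For example, per session (the value is only an example; index, table, and column names hypothetical):

SET maintenance_work_mem = '1GB';          -- affects only this session
CREATE INDEX my_idx ON my_table (my_col);  -- hypothetical names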
regards,
Marc Mamin
will probably start again soon.
Would it make sense to use a tool like e4defrag
(http://www.linux.org/threads/online-defragmentation.4121/)
in order to defrag the free space ?
And how safe is it to use such a tool against a running postgres instance?
many thanks,
Marc Mamin
On 29/12/2013 19:51, Jeff Janes wrote:
> On Thursday, December 19, 2013, Marc Cousin wrote:
>
>
>
> Yeah, I had forgotten to set it up correctly on this test environment
> (its value is correctly set in production environments). Putting it to a
> few gigabyt
On 19/12/2013 21:36, Kevin Grittner wrote:
Marc Cousin wrote:
Then we insert missing paths. This is one of the plans that fail
insert into path (path)
select path from batch
where not exists
(select 1 from path where path.path=batch.path)
group by path;
I know you
On 19/12/2013 19:33, Jeff Janes wrote:
> QUERY PLAN
>
> --
> Nested Loop (cost=0.56..4001768.1
y using very low values for seq_page_cost and
random_page_cost for these 2 queries. I just feel that maybe PostgreSQL could
do a bit better here, so I wanted to submit this use case for discussion.
Regards
Marc
Hello,
Does anything speak against adding a "WITH FREEZE" option to "CREATE TABLE AS",
similar to the new COPY FREEZE feature?
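For comparison, the existing COPY FREEZE only works when the table was created or truncated in the same transaction; a sketch (table names and file path hypothetical):

BEGIN;
CREATE TABLE t_new (LIKE t_old);  -- hypothetical names
COPY t_new FROM '/tmp/data.csv' WITH (FREEZE, FORMAT csv);
COMMIT;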
best regards,
Marc Mamin
course bullsh... It has nothing to do with immutability and can
only apply to a few cases,
e.g. it's fine for select x+1 ... group by x,
but not for select x^2 ... group by x
Marc Mamin
internally ?
best regards,
Marc Mamin
Here is an example to highlight a possible performance loss:
create temp table ref ( i int, r int);
create temp table val ( i int, v int);
insert into ref select s,s%2 from generate_series(1,1)s;
insert into val select s,s%2 from generate_series(1,1)s;
create
From: Stefan Keller [sfkel...@gmail.com]
>Sent: Saturday, 20 July 2013 01:55
>
>Hi Marc
>
>Thanks a lot for your hint!
>
>You mean doing a "SET track_counts (true);" for the whole session?
No,
I mean
ALTER TABLE
age-id/27953.1329434...@sss.pgh.pa.us
as a comment on
http://www.postgresql.org/message-id/c4dac901169b624f933534a26ed7df310861b...@jenmail01.ad.intershop.net
regards,
Marc Mamin
T OUTER JOIN table3 T3_2 ON t2.third_id = T3_2.id
ORDER BY T1.mycolumn2,T1.id
regards,
Marc Mamin
From: pgsql-performance-ow...@postgresql.org
[pgsql-performance-ow...@postgresql.org]" on behalf of "Brian
Fehrle [bri...@consistentstate.co
ate index i_2 on foo (session_id) where session_id%4 = 2;
create index i_3 on foo (session_id) where session_id%4 = 3;
(can be built in parallel using separate threads)
Then you will have to ensure that all your WHERE clauses also contain the index
condition:
WHERE session_id = 27 AND session_id%4 =27%4
regards,
Marc Mamin
to concatenate the georef
within the index, but keep them separated, or even keep them in different
indexes.
Which is best depends on the other queries running against this table.
HTH,
Marc Mamin
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Ioannis
An
but still have to clean garbage and move to prepared for the next but one in
the background
best regards,
Marc Mamin
>>>
>>> I wonder, what is the fastest way to accomplish this kind of task in
>>> PostgreSQL. I am interested in
>>> the fas
> -Original Message-
> From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
>
> 2012/6/26 Marc Mamin :
> >
> >>> On 22/06/12 09:02, Maxim Boguk wrote:
> >
> >>> May be I completely wrong but I always assumed that the access
> speed to
create table t2 ( _array int[]);
alter table t2 alter _array set storage external;
insert into t2 SELECT ARRAY(SELECT * FROM generate_series(1,500));
explain analyze SELECT _array[1] FROM t1;
Total runtime: 0.125 ms
explain analyze SELECT _array[1] FROM t2;
Total runtime: 8.649 ms
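The storage mode in effect can be verified in the catalog, e.g. (a sketch):

-- attstorage: 'p' = plain, 'e' = external, 'm' = main, 'x' = extended (default for arrays)
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 't2'::regclass AND attname = '_array';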
best regards,
M
Thanks for pointing me to that article. I totally forgot that the postgres wiki
existed.
Updating is not an option at the moment, but we'll probably do so in the
future. Until then I can
live with the workaround.
Kind regards,
Marc
t
(cost=0.00..20944125.72 rows=1031020672
width=8)
I would expect this to run half an hour or so, completely overloading the
server...
Any Ideas?
Kind regards
Marc
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: Wed 2/29/2012 7:32
>
> On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin wrote:
> > without analyze: http://explain.depesz.com/s/6At
> > with analyze:http://explain.depesz.com/s/r3B
...
> The problem seems to
position.
So I repeated the test with an additional search term at the last
position, but without significant change:
(result from the 6th test below)
without analyze: http://explain.depesz.com/s/6At
with analyze:http://explain.depesz.com/s/r3B
best regards,
Marc Mamin
Here all my results
sce(toplevelrid,msoffset::varchar);
without stats: http://explain.depesz.com/s/qPg
with stats: http://explain.depesz.com/s/88q
aserr_20120125_tvi: GIN Index on my_func(.,.,.,.,.,.)
best regards,
Marc Mamin
> -Original Message-
> From: pgsql-performance-ow...@postgre
l fix that model, but am first looking for a quick way to restore
performance on our production servers.
best regards,
Marc Mamin
he main
table with a short one line SQL delete statement before the
interpolation and merge.
> Tada.
:-
> Enjoy !
I certainly will. Many thanks for those great lines of SQL!
Hope you recover from your flu quickly!
All the best,
Marc
imp(t_value,t_record,output_id) where t_imp.id is
not null.
regards,
Marc Mamin
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Jochen Erwied
Sent: Sat 1/7/2012 12:57
To: anto...@inaps.org
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERF
On 6 January 2012 20:38, Samuel Gendler wrote:
> On Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard
> wrote:
>> On 6 January 2012 20:02, Samuel Gendler wrote:
>> > Have you considered doing the insert by doing a bulk insert into a temp
>> > table and then pulling rows
t
only worth doing this for a large number of inserted/updated elements?
What if the number of inserts/updates is only a dozen at a time for a
large table (>10M entries)?
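For reference, the pattern under discussion as a sketch (pre-ON CONFLICT style; all names hypothetical, and not safe under concurrent writers as written):

BEGIN;
CREATE TEMP TABLE tmp (LIKE target) ON COMMIT DROP;  -- "target" is hypothetical
COPY tmp FROM STDIN;  -- or a batch of small INSERTs
UPDATE target t SET val = tmp.val FROM tmp WHERE t.id = tmp.id;
INSERT INTO target
SELECT * FROM tmp
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = tmp.id);
COMMIT;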
Thanks,
Marc
USD 12.0
=>
 x | overview
---+--------------------------
 a | {(EUR,20.0), (CHF,7.5)}
 b | {(USD,10.0)}
regards,
Marc Mamin
> On 12/14/2011 11:21 AM, Marc Mamin wrote:
> > Hello,
> >
> > For such cases (see below), it would be nice to have an unnest
> function that on
ACH version.
CREATE OR REPLACE FUNCTION input_value_un (in_inputs numeric[], in_input_nr
numeric)
RETURNS numeric AS
$BODY$
SELECT u[1][2]
FROM unnest($1, SLICE =1) u
WHERE u[1][1]=in_input_nr
LIMIT 1;
$BODY$
LANGUAGE sql IMMUTABLE;
best regards,
Marc Ma
On Tue, 27 Sep 2011 19:05:09 +1000,
anthony.ship...@symstream.com wrote:
> On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
> > The thing is, the optimizer doesn't know if your data will be in
> > cache when you will run your query… if you are sure most of your
>
On Tue, 27 Sep 2011 12:45:00 +1000,
anthony.ship...@symstream.com wrote:
> On Monday 26 September 2011 19:39, Marc Cousin wrote:
> > Because Index Scans are sorted, not Bitmap Index Scans, which
> > builds a list of pages to visit, to be then visited by the Bitmap
>
'cdr'::text)
> -> Bitmap Index Scan on tevent_cdr_timestamp
> (cost=0.00..57.31 rows=2477 width=0) (actual time=0.404..0.404
> rows=2480 loops=1)
> Index Cond: (("timestamp" >= '2011-09-09
> 22:00:00+10'::timestamp with
dat);
exception
when unique_violation then
update t set dat = a_dat where id = a_id and dat <> a_dat;
return 0;
end;
elsif not test then
update t set dat = a_dat where id = a_id;
return 0;
end if;
return 1;
best regards,
Marc Mamin
-Ursp
CT p.page_id
FROM mediawiki.page p
JOIN mediawiki.revision r on (p.page_id=r.rev_page)
JOIN mediawiki.pagecontent ss on (r.rev_id=ss.old_id)
WHERE (ss.textvector @@ (to_tsquery('fotbal')))
HTH,
Marc Mamin
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Tuesday 01 March 2011 16:33:51, Tom Lane wrote:
> Marc Cousin writes:
> > On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
> >> It's worth pointing out that the only reason this effect is dominating
> >> the runtime is that you don't have any statistic
On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
> Marc Cousin writes:
> > On Monday 28 February 2011 16:35:37, Tom Lane wrote:
> >> Could we see a concrete example demonstrating that? I agree with Heikki
> >> that it's not obvious what you are testing that
On Monday 28 February 2011 16:35:37, Tom Lane wrote:
> Marc Cousin writes:
> > On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
> >> Testing here with a table with 1000 columns and 100 partitions, about
> >> 80% of the planning time is looking up
On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
> On 28.02.2011 11:38, Marc Cousin wrote:
> > I've been facing a very large (more than 15 seconds) planning time in a
> > partitioned configuration. The amount of partitions wasn't completely
> >
fast in 8.4.
Best regards,
Marc
Hello,
UNION removes all duplicates, so the result additionally
requires a sort.
Anyway, for performance issues, you should always start the investigation
with explain analyze.
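If duplicates are acceptable (or cannot occur), UNION ALL avoids that step; a sketch with hypothetical names:

-- UNION deduplicates (sort or hash); UNION ALL just appends both result sets
EXPLAIN ANALYZE SELECT id FROM t1 UNION SELECT id FROM t2;      -- t1/t2 hypothetical
EXPLAIN ANALYZE SELECT id FROM t1 UNION ALL SELECT id FROM t2;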
regards,
Marc Mamin
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow
Another point: would a conditional (partial) index help?
on articles (context_key) where indexed
regards,
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Marc Mamin
Sent: Wed 12/8/2010 9:06
To: Shrirang Chitnis; Bryce Nesbitt; pgsql-performance
ithin the given transaction.
regards,
Marc Mamin
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Shrirang Chitnis
Sent: Wed 12/8/2010 8:05
To: Bryce Nesbitt; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] hashed subplan 5000x slower tha
formula on the fly.
best regards,
Marc Mamin
No, CONCURRENTLY is there to improve table availability during index creation,
but it degrades performance.
best regards,
Marc Mamin
-Original Message-
From: Alex Hunsaker [mailto:bada...@gmail.com]
Sent: Donnerstag, 11. November 2010 19:55
To: Marc Mamin
Cc: pgsql-performance
naive, but why can't postgres use multiple
threads for large sort operations?
best regards,
Marc Mamin
On Friday 04 June 2010 15:59:05, Tom Lane wrote:
> Marc Cousin writes:
> > I hope I'm not going to expose an already known problem, but I couldn't
> > find it in the mailing list archives (I only found
> > http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543
or
software RAID).
Here is the trivial test :
The configuration is the default configuration, just after initdb
CREATE TABLE test (a int);
CREATE INDEX idxtest on test (a);
with wal_sync_method = open_datasync (new default)
marc=# INSERT INTO test SELECT generate_series(1,10);
Hello,
I didn't try it, but the following should be slightly faster:
COUNT( CASE WHEN field >= x AND field < y THEN true END)
instead of
SUM( CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)
(COUNT ignores NULLs, and a CASE without an ELSE yields NULL, so non-matching
rows are simply not counted.)
HTH,
Marc Mamin
From: pgsql-performance-ow.
A few 'obvious' things I see:
ID and POLLID aren't of the same type (numeric vs bigint)
TTIME isn't indexed.
And as a general matter, you should stick to native datatypes if you don't
need numeric.
But as said in the other answer, maybe you should redo this schema and use
more consistent d
> It really has very little impact. It only affects index scans, and
> even then only if effective_cache_size is less than the size of the
> table.
>
> Essentially, when this kicks in, it models the effect that if you are
> index scanning a table much larger than the size of your cache, you
> migh
On Thursday 16 July 2009 23:54:54, Kevin Grittner wrote:
> Marc Cousin wrote:
> > to sum it up, should I keep these values (I hate doing this :) ) ?
>
> Many people need to set the random_page_cost and/or seq_page_cost to
> reflect the overall affect of caching on the act
On Thursday 16 July 2009 22:07:25, Kevin Grittner wrote:
> Marc Cousin wrote:
> > the hot parts of these 2 tables are extremely likely to be in the
> > database or linux cache (buffer hit rate was 97% in the example
> > provided). Moreover, the first two queries of
On Thursday 16 July 2009 07:20:18 Marc Cousin wrote:
> On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
> > Marc Cousin wrote:
> > > This mail contains the asked plans :
> > > Plan 1
> > > around 1 million records to insert,
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
> Marc Cousin wrote:
> > This mail contains the asked plans :
> > Plan 1
> > around 1 million records to insert, seq_page_cost 1, random_page_cost 4
> >
> > -> Hash (cost=425486.72..425486.72
On Wednesday 15 July 2009 15:45:01, Alvaro Herrera wrote:
> Marc Cousin wrote:
> > There are other things I am thinking of: maybe it would be better to
> > have sort space on another (and not DRBD'ed) raid set ? we have a quite
> > cheap setup right now for the d
ar work.
There are other things I am thinking of: maybe it would be better to have sort
space on another (and not DRBD'ed) raid set ? we have a quite
cheap setup right now for the database, and I think maybe this would help scale
better. I can get a filesystem in another volume group, whi
On Tuesday 14 July 2009 10:23:25, Richard Huxton wrote:
> Marc Cousin wrote:
> > Temporarily I moved the problem at a bit higher sizes of batch by
> > changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel
> > like an apprentice sorcerer with this, as I
On Tuesday 14 July 2009 10:15:21, you wrote:
> Marc Cousin wrote:
> >> Your effective_cache_size is really small for the system you seem to
> >> have - its the size of IO caching your os is doing and uses no resources
> >> itself. And 800MB of that on a system
>
> While this is not your question, I still noticed you seem to be on 8.3 -
> it might be a bit faster to use GROUP BY instead of DISTINCT.
It didn't do a big difference, I already tried that before for this query.
Anyway, as you said, it's not the query having problems :)
> Your effective_cac
We regularly do all of dbcheck. This is our real configuration; there really
are lots of servers and lots of files (500 million files backed up every
month).
But thanks for mentioning that.
The thing is we're trying to improve bacula with postgresql in order to make
it able to bear with this
with bacula ...
effective_cache_size = 800MB
default_statistics_target = 1000
PostgreSQL is 8.3.5 on Debian Lenny
I'm sorry for this very long email, I tried to be as precise as I could, but
don't hesitate to ask me more.
Thanks for helping.
Marc Cousin
It's not that trivial with Oracle either. I guess you had to use shared
servers to get to that amount of sessions. They're most of the time not
activated by default (dispatchers is at 0).
Granted, they are part of the 'main' product, so you just have to set up
dispatchers, shared servers, circu
Hello Matthew,
Another idea:
Are your objects limited to some smaller ranges of your whole interval ?
If yes, you may be able to reduce the ranges to search by using an
additional table with the min(start), max(end) of each object...
Marc Mamin
makes sense:
..
WHERE l2.start BETWEEN l1.start AND l1.end
..
UNION
..
WHERE l1.start BETWEEN l2.start AND l2.end
..
The first clause being equivalent to
AND l1.start <= l2.end
AND l1.end >= l2.start
AND l1.start <= l2.start
I don't know how you have to dea
in my example is the best method though.
Marc Mamin
SELECT
l1.id AS id1,
l2.id AS id2
FROM
location l1,
location l2
WHERE l1.objectid = 1
AND (l2.start BETWEEN l1.start AND l1.end
OR
l1.start BETWEEN l2.start AND l2.end
)
l1.start
AND l2.
Hello,
To improve performance, I would like to try moving the temp_tablespaces
locations outside of our RAID system.
Is this good practice?
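The mechanics would look like this sketch (tablespace name and path hypothetical):

-- create a tablespace on the separate disk and use it for temporary files
CREATE TABLESPACE temp_ts LOCATION '/mnt/separate_disk/pg_temp';  -- hypothetical
SET temp_tablespaces = 'temp_ts';  -- or set it in postgresql.conf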
Thanks,
Marc Mamin
On Sunday 30 November 2008 19:45:11, tmp, you wrote:
> I am struggling with the following query which fetches a random subset
> of 200 questions that matches certain tags within certain languages.
> However, the query takes forever to evaluate, even though I have a
> "limit 200" appended. An
Hi,
Maybe you can try this syntax. I'm not sure, but it may perform better:
delete from company_alias USING comprm
where company_alias.company_id = comprm.id
Cheers,
Marc
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Someone on this list has one of those 'confirm your email' filters on their
mailbox, which is bouncing back messages ... this is an attempt to try and
narrow down the address that is causing this ...
- --
Marc G. FournierHub.O
Hi,
This occurs on postgresql 8.2.5.
I'm a bit at a loss with the plan chosen for a query:
The query is this one:
SELECT SULY_SAOEN.SAOEN_ID, SULY_SDCEN.SDCEN_REF, SULY_SDCEN.SDCEN_LIB,
CSTD_UTI.UTI_NOM, CSTD_UTI.UTI_LIBC, SULY_SAOEN.SAOEN_DTDERNENVOI,
SULY_SDCEN.SDCEN_DTLIMAP, SULY_PF
large datasets and other applications running. In my experience,
shared_buffers is more important than work_mem.
Have you tried increasing default_statistics_target (e.g. to 200 or more) and
after that
running "analyze" on your tables or the entire database?
Marc
Christian Rengst
pe to help,
Marc
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Walter
Mauritz
Sent: Tuesday, September 04, 2007 8:53 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] join tables vs. denormalization by trigger
Hi,
I wonder about differenc
size of about 400 GB,
simulating 3 different customers, also with data quite equally split
across 3 schemas.
I will post our configuration(s) later on.
Thanks again for all your valuable input.
Marc Mamin
4GB
RAM, 4 cpus) and the benchmark server; one of the targets of this
benchmark is to verify the scalability of our application.
And you have no reason to be envious, as the server doesn't belong to us :-)
Thanks for your comments,
Marc Mamin
Postgres version: 8.2.1
Server
On Wednesday 11 July 2007 22:35:31, Tom Lane, you wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > Nevertheless, shouldn't the third estimate be smaller or equal to the sum
> > of the two others ?
>
> The planner's estimation for subplan conditi
Hi,
I'm having a weird problem with a query:
I've simplified it to get the significant part (see end of message).
The point is I've got a simple
SELECT field FROM table WHERE 'condition1'
Estimated returned rows : 5453
Then
SELECT field FROM table WHERE 'condition2'
Estimated returned rows : 705
Th
t may insert a new row, the returned id is invariant for
a given user
(I don't really understand the holdability of immutable functions;
are the results cached only for the lifetime of a prepared statement?
or can they be shared by different sessions?)
Thanks,
Marc
--Table
plete configuration below)
- has anybody built a similar workflow ?
- could this be a feature request to extend the capabilities of copy
from ?
Thanks for your time and attention,
Marc Mamin
where
eventmain.incidentid = keyword_incidents.incidentid
and eventgeo.incidentid = keyword_incidents.incidentid
and ( recordtext like '%JOSE CHAVEZ%' )
)foo
where eventactivity.incidentid = foo.incidentid
order by foo.entrydate limit 10000;
HTH,
Marc
many "delete" with "drop table" statements,
whis is probably the main advantage of the solution.
The biggest issue was the implementation time ;-) but I'm really happy
with the resulting performances.
HTH,
Marc
-Original Message-
From: [EMAIL PROTECTED]
[ma
Hi...
Bacula doesn't use transactions right now, so every insert is done separately with
autocommit.
Moreover, the insert loop for the main table is done by several individual
queries to insert data in several tables (filename, dir, then file), so this
is slow.
There's work underway to speed that up,