On Tue, 29 Jun 2004 17:55:37 +0200, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> The performance issue I'm experiencing here is somewhat
> weird: the server gets a high average load (from 5 up to 15,
> 8 on average). Standard performance monitoring
> utilities (like top) show that the CPUs are not loaded
>
ing
content.
BTW. If you are a SQL/python programmer in (or near) Lanarkshire,
Scotland, we have a vacancy. ;-)
Cheers
Marc
--
Marc Burgauer
Sharedbase Ltd
http://www.sharedbase.com
Creating and supporting on-line communities
---(end of broadcast)--
n Postgres;
currently, I only use btree indexes. Could I gain performance while using
hash indexes as well?
How does Postgres handle concurrent COPY FROM on the same table / different
tables?
I'd be glad of any further suggestions on how to increase my
performance.
Marc
--
FOR result IN EXECUTE (sql) LOOP
    RETURN NEXT result;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql;
Note: in your case the function should have a further parameter to join
largetable(n) to smalltable in the "sub queries".
HTH,
Marc
> I've got a query that I think the query optimi
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
But INT2, INT4, INT8 and "SERIAL" are each considered a distinct datatype.
Am I right?
Thanks,
Marc
--
The query plan in PGAdmin is about 100KB, with 800 lines :-)
The performance is not that bad, but I'm wondering if there are some
POSTGRES limitations I should take care of with this strategy.
Thanks,
Marc
--
created | timestamp without time zone | not null
created_userid | character varying(255) | not null
updated | timestamp without time zone | not null
updated_userid | character varying(255) | not null
published | timestamp without time zone |
published_userid | character varying(255) |
status_chan
Copy.
hth,
Marc Mamin
From: [EMAIL PROTECTED] on behalf of Oliver Crosby
Sent: Wed 7/20/2005 3:50 AM
To: PFC
Cc: Sven Willenberger; Dawid Kuroczko; Kevin Grittner; [EMAIL PROTECTED]; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Looking for tips
Sorry for the lack of specifics... We
Hi,
I usually use PostgreSQL coupled with Linux, but I have to use Windows for a
particular project.
So I wanted to do some tests to know if the performance will be acceptable (I
don't need PostgreSQL to be as fast with windows as with linux, but it has to
be usable...).
I started with tryin
>
> In my experience win32 is par with linux generally with a few gotchas on
> either side. Are your times with fsync=no? It's much harder to give
> apples-apples comparison with fsync=on for various reasons.
It is with fsync=off on Windows and fsync=on on Linux.
>
> Are you running stats_command_str
t transaction the insert time
> grows with #recs inserted. Time to insert first 50k recs is about 27
> sec and last 50k recs is 77 sec. I also confirmed that size of table is
> not playing a role here.
>
> Marc, can you do select timeofday() every 50k recs from linux? Also a
> >
> > about 2
> >
> > > times slower than inserting locally (the linux client had a
> >
> > slower CPU
> >
> > > 1700MHz against 3000).
> > > Could it be related to a problem in the windows psql client ?
> >
> > [OK, I'm brin
I have run into this type of query problem as well. I solved it in my
application by the following type of query.
SELECT tlid
FROM completechain AS o
WHERE not exists (
SELECT 1
FROM completechain
WHERE tlid=o.tlid and ogc_fid!=o.ogc_fid
);
Assumes of course that you hav
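The NOT EXISTS pattern above can be sketched end-to-end with SQLite through Python as a lightweight stand-in for Postgres; the names (completechain, tlid, ogc_fid) follow the example, the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE completechain (ogc_fid INTEGER PRIMARY KEY, tlid INTEGER)")
# tlid 1 appears twice (a duplicate); tlid 2 and 3 appear once each.
conn.executemany("INSERT INTO completechain (ogc_fid, tlid) VALUES (?, ?)",
                 [(1, 1), (2, 1), (3, 2), (4, 3)])

# Same shape as the query above: keep a tlid only when no other
# row (different ogc_fid) shares it.
rows = conn.execute("""
    SELECT tlid
    FROM completechain AS o
    WHERE NOT EXISTS (
        SELECT 1
        FROM completechain
        WHERE tlid = o.tlid AND ogc_fid != o.ogc_fid
    )
    ORDER BY tlid
""").fetchall()
print([r[0] for r in rows])  # [2, 3] — the non-duplicated tlids
```

In Postgres, an index on tlid (or on (tlid, ogc_fid)) is what usually keeps the correlated subquery cheap on large tables.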
We have a large DB
with partitioned tables in postgres. We have had trouble with an
ORDER/LIMIT type query: the order and limit are not pushed down to the
sub-tables.
CREATE TABLE base (
    foo int
);
CREATE TABLE bar_0 (
    extra int
) INHERITS (base);
ALTER TABLE bar ADD PRIMA
Hi,
I've been working on trying to partition a big table (I've never partitioned a
table in any other database till now).
Everything went ok, except one query that didn't work afterwards.
I've put the partition description, indexes, etc ..., and the explain plan
attached.
The query is extremel
;
> On 12/13/05, Pandurangan R S <[EMAIL PROTECTED]> wrote:
> > Did you set constraint_exclusion = true in postgresql.conf file?
> >
> > On 12/13/05, Marc Cousin <[EMAIL PROTECTED]> wrote:
> > > Hi,
> > >
> > > I've been workin
many "delete" with "drop table" statements,
which is probably the main advantage of the solution.
The biggest issue was the implementation time ;-) but I'm really happy
with the resulting performance.
HTH,
Marc
-Original Message-
From: [EMAIL PROTECTED]
[ma
where
eventmain.incidentid = keyword_incidents.incidentid
and eventgeo.incidentid = keyword_incidents.incidentid
and ( recordtext like '%JOSE CHAVEZ%' )
)foo
where eventactivity.incidentid = foo.incidentid
order by foo.entrydate limit 10000;
HTH,
Marc
plete configuration below)
- has anybody built a similar workflow?
- could this be a feature request to extend the capabilities of copy
from?
Thanks for your time and attention,
Marc Mamin
t may insert a new row, the returned id is invariant for
a given user
(I don't really understand the holdability of immutable functions;
are the results cached only for the lifetime of a prepared statement?
Or can they be shared by different sessions?)
Thanks,
Marc
--Table
Hi,
I'm having a weird problem with a query:
I've simplified it to get the significant part (see end of message).
The point is I've got a simple
SELECT field FROM table WHERE 'condition1'
Estimated returned rows : 5453
Then
SELECT field FROM table WHERE 'condition2'
Estimated returned rows : 705
Th
On Wednesday 11 July 2007 22:35:31, Tom Lane wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > Nevertheless, shouldn't the third estimate be smaller or equal to the sum
> > of the two others ?
>
> The planner's estimation for subplan conditi
4GB
RAM, 4 cpus) and the benchmark server; one of the targets of this
benchmark is to verify the scalability of our application.
And you have no reason to be envious, as the server doesn't belong to us :-)
Thanks for your comments,
Marc Mamin
Posgres version: 8.2.1
Server
size of about 400 GB,
simulating 3 different customers, with data split roughly equally
across 3 schemas.
I will post our configuration(s) later on.
Thanks again for all your valuable input.
Marc Mamin
Hope to help,
Marc
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Walter
Mauritz
Sent: Tuesday, September 04, 2007 8:53 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] join tables vs. denormalization by trigger
Hi,
I wonder about differenc
or
software RAID).
Here is the trivial test :
The configuration is the default configuration, just after initdb
CREATE TABLE test (a int);
CREATE INDEX idxtest on test (a);
with wal_sync_method = open_datasync (new default)
marc=# INSERT INTO test SELECT generate_series(1,10);
On Friday 04 June 2010 15:59:05, Tom Lane wrote:
> Marc Cousin writes:
> > I hope I'm not going to expose an already known problem, but I couldn't
> > find it mailing list archives (I only found
> > http://archives.postgresql.org/pgsql- hackers/2009-12/msg01543
naive, but why can't postgres use multiple
threads for large sort operations?
best regards,
Marc Mamin
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
No, CONCURRENTLY is to improve table availability during index creation, but it
degrades performance.
best regards,
Marc Mamin
-Original Message-
From: Alex Hunsaker [mailto:bada...@gmail.com]
Sent: Donnerstag, 11. November 2010 19:55
To: Marc Mamin
Cc: pgsql-performance
formula on the fly.
best regards,
Marc Mamin
--
ithin the given transaction.
regards,
Marc Mamin
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Shrirang Chitnis
Sent: Wed 12/8/2010 8:05
To: Bryce Nesbitt; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] hashed subplan 5000x slower tha
Another point: would a conditional (partial) index help?
on articles (context_key) where indexed
regards,
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Marc Mamin
Sent: Wed 12/8/2010 9:06
To: Shrirang Chitnis; Bryce Nesbitt; pgsql-performance
Hello,
UNION will remove all duplicates, so the result additionally has to be
sorted.
Anyway, for performance issues, you should always start the investigation
with EXPLAIN ANALYZE.
regards,
Marc Mamin
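The cost difference between UNION and UNION ALL is easy to see in a self-contained sketch. This uses SQLite through Python purely as a stand-in for Postgres; the tables and data are invented for illustration:

```python
import sqlite3

# Two tiny tables with one overlapping value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER)")
conn.execute("CREATE TABLE b (x INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (2,)])
conn.executemany("INSERT INTO b VALUES (?)", [(2,), (3,)])

# UNION removes duplicates, which forces a sort or hash step...
union = conn.execute("SELECT x FROM a UNION SELECT x FROM b").fetchall()
# ...while UNION ALL just concatenates the two result sets as-is.
union_all = conn.execute("SELECT x FROM a UNION ALL SELECT x FROM b").fetchall()

print(len(union), len(union_all))  # 3 4
```

When duplicates are impossible or harmless, UNION ALL avoids the deduplication step entirely.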
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow
fast in 8.4.
Best regards,
Marc
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Major-performance-problem-after-upgrade-from-8-3-to-8-4-tp2796390p3329435.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
--
On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
> On 28.02.2011 11:38, Marc Cousin wrote:
> > I've been facing a very large (more than 15 seconds) planning time in a
> > partitioned configuration. The amount of partitions wasn't completely
> >
On Monday 28 February 2011 16:35:37, Tom Lane wrote:
> Marc Cousin writes:
> > On Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote:
> >> Testing here with a table with 1000 columns and 100 partitions, about
> >> 80% of the planning time is looking up
On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
> Marc Cousin writes:
> > On Monday 28 February 2011 16:35:37, Tom Lane wrote:
> >> Could we see a concrete example demonstrating that? I agree with Heikki
> >> that it's not obvious what you are testing that
On Tuesday 01 March 2011 16:33:51, Tom Lane wrote:
> Marc Cousin writes:
> > On Tuesday 01 March 2011 07:20:19, Tom Lane wrote:
> >> It's worth pointing out that the only reason this effect is dominating
> >> the runtime is that you don't have any statistic
owing
periodically
select change_rule(1);
We've looked at the code and the rule changes appear "easy" but we are
concerned about the required changes for truncate.
Thanks
Marc
m: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, February 01, 2006 12:50 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > Would l
, 2006 7:44 AM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] partitioning and locking problems
>
> Hi, Marc,
>
> Marc Morin wrote:
>
> > 1- long running report is running on view
> > 2- continuous inserters into v
All good ideas; unfortunately, we can't change the inserting application
code easily.
> -Original Message-
> From: Simon Riggs [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, February 07, 2006 5:09 PM
> To: Marc Morin
> Cc: Markus Schaber; pgsql-performance@postgres
sioning,
shared_buffers and checkpoint_segments are interconnected in weird and
wonderful ways... We've seldom found "simple" solutions to performance
problems.
Marc
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of
> Aaron Turne
uld be nice if postgres could understand somehow that
some columns are "dynamic" and that its histogram could be stretched to
the maximal values, or some other technique for estimating rows to the
right of the range of values in the histogram...
Or have so
ailto:[EMAIL PROTECTED]
> Sent: Friday, March 10, 2006 1:31 PM
> To: Marc Morin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Trouble managing planner for
> timestamptz columns
>
> "Marc Morin" <[EMAIL PROTECTED]> writes:
> > We tend
Wondering if
Update firma1.rid set toode=null where toode is not null and not
exists(select 1 from firma1.toode where toode=rid.toode);
Would be faster... Problem appears to be the seqscan of seqscan... No?
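The suggested rewrite (NULL out dangling references via NOT EXISTS) can be sketched end-to-end with SQLite through Python; the table and column names (rid, toode) follow the thread, but the schema details and data here are invented:

```python
import sqlite3

# Minimal stand-in schema: toode is the referenced table, rid holds
# references in its toode column that may be dangling.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE toode (toode TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE rid (id INTEGER PRIMARY KEY, toode TEXT)")
conn.execute("INSERT INTO toode VALUES ('A')")
conn.executemany("INSERT INTO rid (id, toode) VALUES (?, ?)",
                 [(1, 'A'), (2, 'B'), (3, None)])

# 'B' has no matching row in toode, so it gets set to NULL;
# 'A' matches and NULLs are skipped by the IS NOT NULL guard.
conn.execute("""
    UPDATE rid SET toode = NULL
    WHERE toode IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM toode WHERE toode.toode = rid.toode)
""")
rows = conn.execute("SELECT id, toode FROM rid ORDER BY id").fetchall()
print(rows)  # [(1, 'A'), (2, None), (3, None)]
```

In Postgres, an index on toode.toode is what usually keeps the correlated NOT EXISTS probe cheap.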
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On B
Explain analyze would be nice ;-)
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Andrus
> Sent: Tuesday, April 04, 2006 3:37 PM
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] Query runs too long for indexed tables
>
> I have relative
I hope I'm not going to say stupid things, but here's what I know (or I think
I know :) ) about bacula + postgresql.
If I remember correctly (I already discussed this with Kern Sibbald a while
ago), bacula does each insert in its own transaction: that's how the program
is done, and of course i
I'm having a problem with a simple query, that finds children of a node,
using a materialized path to the node. The query:
select n1.id
from nodes n1, nodes n2
where n1.path like n2.path || '%'
and n2.id = 14;
QUERY PLAN
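The materialized-path child lookup above can be sketched with SQLite through Python as a stand-in for Postgres; the nodes table shape and the ids/paths here are invented to match the query's pattern:

```python
import sqlite3

# Each node stores its full ancestry path; descendants of a node are the
# rows whose path starts with that node's path.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, path TEXT)")
conn.executemany("INSERT INTO nodes VALUES (?, ?)",
                 [(14, '1.14.'), (15, '1.14.15.'),
                  (16, '1.14.15.16.'), (20, '1.20.')])

# Same shape as the query in the thread: match n1.path against
# n2.path as a prefix. Note this returns node 14 itself as well.
rows = conn.execute("""
    SELECT n1.id
    FROM nodes n1, nodes n2
    WHERE n1.path LIKE n2.path || '%'
      AND n2.id = 14
    ORDER BY n1.id
""").fetchall()
print([r[0] for r in rows])  # [14, 15, 16]
```

The catch discussed in the thread is that `LIKE n2.path || '%'` has a non-constant pattern, so the planner generally cannot turn it into an index range scan the way it can for a literal 'prefix%'.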
Thanks Tom,
Is that documented somewhere? I can't seem to see any mention of it in
the docs.
Tom Lane wrote:
Marc McIntyre <[EMAIL PROTECTED]> writes:
... Is there a way to perform this efficiently in one query ?
No, because you're hoping for an indexscan optimi
hat
the i/o looks more like write full blocks
Thanks
Marc
> Would it be possible to change the primary key to
> (logtime,key)? This could help keeping the "working window" small.
No, the application accessing the data wants all the rows between start
and end time for a particular key value.
>
> Secondly, the real working set is smaller, as the rows
>
> The bottom line here is likely to be "you need more RAM" :-(
Yup. Just trying to get a handle on what I can do if I need more than 16G
of RAM... That's as much as I can put on the installed base of
servers, 100s of them.
>
> I wonder whether there is a way to use table partitioning t
d to drill down
So, I'd like to have my cake and eat it too... :-)
I'd like to have my indexes built as rows are inserted into the
partition, to help with the drill down...
> -Original Message-
> From: Bucky Jordan [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, September 26,
Hi...
Bacula does no transactions right now, so every insert is done separately with
autocommit.
Moreover, the insert loop for the main table is done by several individual
queries to insert data in several tables (filename, dir, then file), so this
is slow.
There's work underway to speed that up,
Hi. I hope I'm not asking too trivial a question here...
I'm having trouble with a (quite big) query, and can't find a way to make it
faster.
Here is the information :
Tables :
sces_vte -> 2753539 rows
sces_art -> 602327
sces_fsf -> 8126
sces_frc -> 7763
sces_tps ->
ember 2004 16:40, you wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > I'm having trouble with a (quite big) query, and can't find a way to make it
> > faster.
>
> Seems like it might help if the thing could use a HashAggregate instead
> of sort/gro
On Wednesday 08 September 2004 16:56, you wrote:
> Marc Cousin <[EMAIL PROTECTED]> writes:
> > The query has been generated by business objects ... I'll try to suggest to the
> > developers to remove this constant (if they can)...
> > The fields used by the
On Fri, 15 Oct 2004 11:54:44 -0500, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> My basic question to the community is "is PostgreSQL approximately as fast
> as Oracle?"
>
> I don't want benchmarks, they're BS. I want a gut feel from this community
> because I know many of you are in mixed shop
large datasets and other applications running. In my experience,
shared_buffers are more important than work_mem.
Have you tried increasing default_statistics_target (e.g. to 200 or more) and
after that
running "analyze" on your tables or the entire database?
Marc
Christian Rengst
Hi,
This occurs on postgresql 8.2.5.
I'm a bit at a loss with the plan chosen for a query.
The query is this one:
SELECT SULY_SAOEN.SAOEN_ID, SULY_SDCEN.SDCEN_REF, SULY_SDCEN.SDCEN_LIB,
CSTD_UTI.UTI_NOM, CSTD_UTI.UTI_LIBC, SULY_SAOEN.SAOEN_DTDERNENVOI,
SULY_SDCEN.SDCEN_DTLIMAP, SULY_PF
Hi,
Maybe you can try this syntax. I'm not sure, but it may perform better:
delete from company_alias USING comprm
where company_alias.company_id = comprm.id
Cheers,
Marc
--
On Sunday 30 November 2008 19:45:11, tmp wrote:
> I am struggling with the following query which fetches a random subset
> of 200 questions that matches certain tags within certain languages.
> However, the query takes forever to evaluate, even though I have a
> "limit 200" appended. An
Hello,
To improve performance, I would like to try moving the temp_tablespaces
locations outside of our RAID system.
Is this good practice?
Thanks,
Marc Mamin
--
in my example is the best method though.
Marc Mamin
SELECT
l1.id AS id1,
l2.id AS id2
FROM
location l1,
location l2
WHERE l1.objectid = 1
AND (l2.start BETWEEN l1.start AND l1.end
OR
l1.start BETWEEN l2.start AND l2.end
)
l1.start
AND l2.
makes sense:
..
WHERE l2.start BETWEEN l1.start AND l1.end
..
UNION
..
WHERE l1.start BETWEEN l2.start AND l2.end
..
The first clause being equivalent to
AND l1.start <= l2.end
AND l1.end >= l2.start
AND l1.start <= l2.start
I don't know how you have to dea
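The two-BETWEEN overlap test discussed above is equivalent to the classic interval-overlap predicate `l1.start <= l2.end AND l2.start <= l1.end`, assuming every interval has start <= end. A brute-force check in plain Python (no database, made-up small integer intervals) confirms it:

```python
from itertools import product

def between_form(s1, e1, s2, e2):
    # l2.start BETWEEN l1.start AND l1.end
    # OR l1.start BETWEEN l2.start AND l2.end
    return (s1 <= s2 <= e1) or (s2 <= s1 <= e2)

def overlap_form(s1, e1, s2, e2):
    # Classic interval-overlap predicate.
    return s1 <= e2 and s2 <= e1

# Exhaustively compare both forms on all small intervals with start <= end.
vals = range(5)
mismatches = [
    (s1, e1, s2, e2)
    for s1, e1, s2, e2 in product(vals, repeat=4)
    if s1 <= e1 and s2 <= e2
    and between_form(s1, e1, s2, e2) != overlap_form(s1, e1, s2, e2)
]
print(len(mismatches))  # 0: the two predicates agree
```

The AND-only form is often preferable in SQL because OR conditions tend to defeat index usage.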
Hello Matthew,
Another idea:
Are your objects limited to some smaller ranges of your whole interval?
If yes, you may be able to reduce the ranges to search by using an
additional table with the min(start) / max(end) of each object...
Marc Mamin
It's not that trivial with Oracle either. I guess you had to use shared
servers to get to that amount of sessions. They're most of the time not
activated by default (dispatchers is at 0).
Granted, they are part of the 'main' product, so you just have to set up
dispatchers, shared servers, circu
with bacula ...
effective_cache_size = 800MB
default_statistics_target = 1000
PostgreSQL is 8.3.5 on Debian Lenny
I'm sorry for this very long email, I tried to be as precise as I could, but
don't hesitate to ask me more.
Thanks for helping.
Marc Cousin
--
We regularly do all of dbcheck. This is our real configuration, there are
really lots of servers and lots of files (500 million files backed up every
month).
But thanks for mentioning that.
The thing is we're trying to improve bacula with postgresql in order to make
it able to bear with this
>
> While this is not your questions, I still noticed you seem to be on 8.3 -
> it might be a bit faster to use GROUP BY instead of DISTINCT.
It didn't make a big difference; I had already tried that before for this query.
Anyway, as you said, it's not the query having problems :)
> Your effective_cac
On Tuesday 14 July 2009 10:15:21, you wrote:
> Marc Cousin wrote:
> >> Your effective_cache_size is really small for the system you seem to
> >> have - its the size of IO caching your os is doing and uses no resources
> >> itself. And 800MB of that on a system
On Tuesday 14 July 2009 10:23:25, Richard Huxton wrote:
> Marc Cousin wrote:
> > Temporarily I moved the problem at a bit higher sizes of batch by
> > changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel
> > like an apprentice sorcerer with this, as I
ar work.
There are other things I am thinking of: maybe it would be better to have sort
space on another (and non-DRBD) raid set? We have a quite
cheap setup right now for the database, and I think maybe this would help scale
better. I can get a filesystem in another volume group, whi
On Wednesday 15 July 2009 15:45:01, Alvaro Herrera wrote:
> Marc Cousin wrote:
> > There are other things I am thinking of: maybe it would be better to
> > have sort space on another (and non-DRBD) raid set? We have a quite
> > cheap setup right now for the d
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
> Marc Cousin wrote:
> > This mail contains the asked plans :
> > Plan 1
> > around 1 million records to insert, seq_page_cost 1, random_page_cost 4
> >
> > -> Hash (cost=425486.72..425486.72
On Thursday 16 July 2009 07:20:18 Marc Cousin wrote:
> On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote:
> > Marc Cousin wrote:
> > > This mail contains the asked plans :
> > > Plan 1
> > > around 1 million records to insert,
On Thursday 16 July 2009 22:07:25, Kevin Grittner wrote:
> Marc Cousin wrote:
> > the hot parts of these 2 tables are extremely likely to be in the
> > database or linux cache (buffer hit rate was 97% in the example
> > provided). Moreover, the first two queries of
On Thursday 16 July 2009 23:54:54, Kevin Grittner wrote:
> Marc Cousin wrote:
> > to sum it up, should I keep these values (I hate doing this :) ) ?
>
> Many people need to set the random_page_cost and/or seq_page_cost to
> reflect the overall effect of caching on the act
> It really has very little impact. It only affects index scans, and
> even then only if effective_cache_size is less than the size of the
> table.
>
> Essentially, when this kicks in, it models the effect that if you are
> index scanning a table much larger than the size of your cache, you
> migh
The few 'obvious' things I see:
ID and POLLID aren't of the same type (numeric vs bigint).
TTIME isn't indexed.
And as a general matter, you should stick to native datatypes if you don't
need numeric.
But as said in the other answer, maybe you should redo this schema and use
more consistent d
Hello,
I didn't try it, but the following should be slightly faster:
COUNT( CASE WHEN field >= x AND field < y THEN true END)
instead of
SUM( CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END)
HTH,
Marc Mamin
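The trick above works because COUNT only counts non-NULL values, so a CASE with no ELSE branch (which yields NULL outside the range) can replace SUM over 1/0. A sketch with SQLite through Python (table, column, and bounds invented; 1 stands in for the original's `true`, since the aggregate only cares about non-NULL-ness):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (field INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])

x, y = 3, 7  # count values with x <= field < y (here: 3, 4, 5, 6)
count_form, sum_form = conn.execute("""
    SELECT COUNT(CASE WHEN field >= ? AND field < ? THEN 1 END),
           SUM(CASE WHEN field >= ? AND field < ? THEN 1 ELSE 0 END)
    FROM t
""", (x, y, x, y)).fetchone()
print(count_form, sum_form)  # 4 4
```

Both forms count the same rows; the COUNT variant just skips the per-row addition of zeros.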
From: pgsql-performance-ow.
SELECT p.page_id
FROM mediawiki.page p
JOIN mediawiki.revision r on (p.page_id=r.rev_page)
JOIN mediawiki.pagecontent ss on (r.rev_id=ss.old_id)
WHERE (ss.textvector @@ (to_tsquery('fotbal')))
HTH,
Marc Mamin
--
dat);
exception
when unique_violation then
update t set dat = a_dat where id = a_id and dat <> a_dat;
return 0;
end;
elsif not test then
update t set dat = a_dat where id = a_id;
return 0;
end if;
return 1;
best regards,
Marc Mamin
-Ursp
'cdr'::text)
> -> Bitmap Index Scan on tevent_cdr_timestamp
> (cost=0.00..57.31 rows=2477 width=0) (actual time=0.404..0.404
> rows=2480 loops=1)
> Index Cond: (("timestamp" >= '2011-09-09
> 22:00:00+10'::timestamp with
On Tue, 27 Sep 2011 12:45:00 +1000,
anthony.ship...@symstream.com wrote:
> On Monday 26 September 2011 19:39, Marc Cousin wrote:
> > Because Index Scans are sorted, not Bitmap Index Scans, which
> > builds a list of pages to visit, to be then visited by the Bitmap
>
On Tue, 27 Sep 2011 19:05:09 +1000,
anthony.ship...@symstream.com wrote:
> On Tuesday 27 September 2011 18:54, Marc Cousin wrote:
> > The thing is, the optimizer doesn't know if your data will be in
> > cache when you will run your query… if you are sure most of your
>
ACH version.
CREATE OR REPLACE FUNCTION input_value_un (in_inputs numeric[], in_input_nr
numeric)
RETURNS numeric AS
$BODY$
SELECT u[1][2]
FROM unnest($1, SLICE =1) u
WHERE u[1][1]=in_input_nr
LIMIT 1;
$BODY$
LANGUAGE sql IMMUTABLE;
best regards,
Marc Ma
USD 12.0
=>
x overview
- ----
a {(EUR,20.0), (CHF,7.5)}
b {(USD,10.0)}
regards,
Marc Mamin
> On 12/14/2011 11:21 AM, Marc Mamin wrote:
> > Hello,
> >
> > For such cases (see below), it would be nice to have an unnest
> function that on
t
only worth doing this for a large number of inserted/updated elements?
What if the number of inserts/updates is only a dozen at a time for a
large table (>10M entries)?
Thanks,
Marc
--
On 6 January 2012 20:38, Samuel Gendler wrote:
> On Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard
> wrote:
>> On 6 January 2012 20:02, Samuel Gendler wrote:
>> > Have you considered doing the insert by doing a bulk insert into a temp
>> > table and then pulling rows
imp(t_value,t_record,output_id) where t_imp.id is
not null.
regards,
Marc Mamin
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Jochen Erwied
Sent: Sat 1/7/2012 12:57
To: anto...@inaps.org
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERF
Betreff: Re: [PERF
he main
table with a short one line SQL delete statement before the
interpolation and merge.
> Tada.
:-)
> Enjoy !
I certainly will. Many thanks for those great lines of SQL!
Hope you recover from your flu quickly!
All the best,
Marc
--
l fix that model, but am first looking for a quick way to restore
performance on our production servers.
best regards,
Marc Mamin
sce(toplevelrid,msoffset::varchar);
without stats: http://explain.depesz.com/s/qPg
with stats: http://explain.depesz.com/s/88q
aserr_20120125_tvi: GIN Index on my_func(.,.,.,.,.,.)
best regards,
Marc Mamin
> -Original Message-
> From: pgsql-performance-ow...@postgre
position.
So I repeated the test with an additional search term at the last
position, but without significant change
(results from the 6th test below):
without analyze: http://explain.depesz.com/s/6At
with analyze: http://explain.depesz.com/s/r3B
best regards,
Marc Mamin
Here are all my results.
> From: Robert Haas [mailto:robertmh...@gmail.com]
> Sent: Wed 2/29/2012 7:32
>
> On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin wrote:
> > without analyze: http://explain.depesz.com/s/6At
> > with analyze: http://explain.depesz.com/s/r3B
...
> The problem seems to
t
(cost=0.00..20944125.72 rows=1031020672 width=8)
I would expect this to run half an hour or so, completely overloading the
server...
Any Ideas?
Kind regards
Marc
--
Thanks for pointing me to that article. I totally forgot that the postgres wiki
existed.
Updating is not an option at the moment, but we'll probably do so in the
future. Until then I can
live with the workaround.
Kind regards,
Marc
--
ble t2 ( _array int[]);
alter table t2 alter _array set storage external;
insert into t2 SELECT ARRAY(SELECT * FROM generate_series(1,500));
explain analyze SELECT _array[1] FROM t1;
Total runtime: 0.125 ms
explain analyze SELECT _array[1] FROM t2;
Total runtime: 8.649 ms
best regards,
M