Re: [PERFORM] High load average with PostgreSQL 7.4.2 on debian/ibm eserver.

2004-06-29 Thread Marc
On Tue, 29 Jun 2004 17:55:37 +0200, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > The performance issue I'm experiencing here is somewhat > weird - the server gets a high average load (from 5 up to 15, > 8 on average). Standard performance monitoring > utilities (like top) show that the CPUs are not loaded >

[PERFORM] Reading recommendations

2005-03-30 Thread Marc Burgauer
ing content. BTW. If you are a SQL/python programmer in (or near) Lanarkshire, Scotland, we have a vacancy. ;-) Cheers Marc -- Marc Burgauer Sharedbase Ltd http://www.sharedbase.com Creating and supporting on-line communities

[PERFORM] tuning Postgres for large data import (using Copy from)

2005-05-12 Thread Marc Mamin
n Postgres; currently, I only use btree indexes. Could I gain performance while using hash indexes as well? How does Postgres handle concurrent copy from on: same table / different tables? I'd be glad of any further suggestion on how to further increase performance. Marc

Re: [PERFORM] Optimising queries involving unions

2005-05-27 Thread Marc Mamin
UTE (sql) LOOP return NEXT result; end loop; return ; END; $$ LANGUAGE plpgsql; Note: in your case the function should have a further parameter to join largetable(n) to smalltable in the "sub queries" HTH, Marc > I've got a query that I think the query optimi

[PERFORM] TIP 9: the planner will ignore... & datatypes

2005-06-01 Thread Marc Mamin
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match But INT2, INT4, INT8 and "SERIAL" are considered to be a single datatype. Am I right? Thanks, Marc
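
A small sketch of the situation the tip describes (table names are illustrative; whether the planner copes without an explicit cast depends on the PostgreSQL version):
  CREATE TABLE small_ids (id int2 PRIMARY KEY);
  CREATE TABLE big_ids   (id int8 PRIMARY KEY);
  -- joining columns of different integer widths; casting one side keeps
  -- the comparison within a single datatype so the index stays usable
  SELECT s.id FROM small_ids s JOIN big_ids b ON b.id = s.id::int8;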

[PERFORM] Query limitations (size, number of UNIONs ...)

2005-06-02 Thread Marc Mamin
The query plan in PGAdmin is about 100KB big, with 800 lines :-) The performance is not that bad, but I'm wondering if there are some POSTGRES limitations I should take care of with this strategy. Thanks, Marc

[PERFORM] Slow Query

2005-07-13 Thread Marc McIntyre
ted | timestamp without time zone | not null created_userid | character varying(255) | not null updated | timestamp without time zone | not null updated_userid | character varying(255) | not null published | timestamp without time zone | published_userid | character varying(255) | status_chan

Re: [PERFORM] Looking for tips

2005-07-23 Thread Marc Mamin
Copy. hth, Marc Mamin From: [EMAIL PROTECTED] on behalf of Oliver Crosby Sent: Wed 7/20/2005 3:50 AM To: PFC Cc: Sven Willenberger; Dawid Kuroczko; Kevin Grittner; [EMAIL PROTECTED]; pgsql-performance@postgresql.org Subject: Re: [PERFORM] Looking for tips Sorry for the lack of specifics... We

[PERFORM] insert performance for win32

2005-09-06 Thread Marc Cousin
Hi, I usually use PostgreSQL coupled with Linux, but I have to use Windows for a particular project. So I wanted to do some tests to see if the performance will be acceptable (I don't need PostgreSQL to be as fast on Windows as on Linux, but it has to be usable...). I started with tryin

Re: [PERFORM] insert performance for win32

2005-09-06 Thread Marc Cousin
> > In my experience win32 is par with linux generally with a few gotchas on > either side.  Are your times with fsync=no? It's much harder to give > apples-apples comparison with fsync=on for various reasons. It is with fsync=off on windows, fsync=on on linux > > Are you running stats_command_str

Re: [PERFORM] insert performance for win32

2005-09-07 Thread Marc Cousin
t transaction the insert time > grows with #recs inserted. Time to insert first 50k recs is about 27 > sec and last 50k recs is 77 sec. I also confirmed that size of table is > not playing a role here. > > Marc, can you do select timeofday() every 50k recs from linux? Also a

Re: [PERFORM] insert performance for win32

2005-11-02 Thread Marc Cousin
y > > > > about 2 > > > > > times slower than inserting locally (the linux client had a > > > > slower CPU > > > > > 1700MHz against 3000). > > > Could it be related to a problem in the windows psql client ? > > > > [OK, I'm brin

Re: [PERFORM] Sort performance on large tables

2005-11-08 Thread Marc Morin
I have run into this type of query problem as well. I solved it in my application by the following type of query. SELECT tlid FROM completechain AS o WHERE not exists ( SELECT 1 FROM completechain WHERE tlid=o.tlid and ogc_fid!=o.ogc_fid ); Assumes of course that you hav

[PERFORM] sort/limit across union all

2005-11-18 Thread Marc Morin
We have a large DB with partitioned tables in postgres. We have had trouble with an ORDER/LIMIT type query: the order and limit are not pushed down to the sub-tables. CREATE TABLE base ( foo int ); CREATE TABLE bar_0 ( extra int ) INHERITS (base); ALTER TABLE bar ADD PRIMA
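
A manual workaround (a sketch, not taken from the thread; bar_1 is assumed to be a second partition) is to push the ORDER BY/LIMIT into each branch before combining the partitions:
  SELECT foo FROM (
      (SELECT foo FROM bar_0 ORDER BY foo DESC LIMIT 10)
      UNION ALL
      (SELECT foo FROM bar_1 ORDER BY foo DESC LIMIT 10)
  ) s
  ORDER BY foo DESC
  LIMIT 10;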

[PERFORM] partitioning

2005-12-13 Thread Marc Cousin
Hi, I've been working on trying to partition a big table (I've never partitioned a table in any other database till now). Everything went ok, except one query that didn't work afterwards. I've put the partition description, indexes, etc ..., and the explain plan attached. The query is extremel

Re: [PERFORM] partitioning

2005-12-13 Thread Marc Cousin
; > On 12/13/05, Pandurangan R S <[EMAIL PROTECTED]> wrote: > > Did you set constraint_exclusion = true in postgresql.conf file? > > > > On 12/13/05, Marc Cousin <[EMAIL PROTECTED]> wrote: > > > Hi, > > > > > > I've been workin

Re: [PERFORM] Scaling SELECT:s with the number of disks on a stripe

2007-03-29 Thread Marc Mamin
many "delete" with "drop table" statements, whis is probably the main advantage of the solution. The biggest issue was the implementation time ;-) but I'm really happy with the resulting performances. HTH, Marc -Original Message- From: [EMAIL PROTECTED] [ma

Re: [PERFORM] Planner doing seqscan before indexed join

2007-03-29 Thread Marc Mamin
where eventmain.incidentid = keyword_incidents.incidentid and eventgeo.incidentid = keyword_incidents.incidentid and ( recordtext like '%JOSE CHAVEZ%' ) )foo where eventactivity.incidentid = foo.incidentid order by foo.entrydate limit 10000; HTH, Marc

[PERFORM] copy from performance on large tables with indexes

2007-06-07 Thread Marc Mamin
plete configuration below) - has anybody built a similar workflow ? - could this be a feature request to extend the capabilities of copy from ? Thanks for your time and attention, Marc Mamin

[PERFORM] tuning a function to insert/retrieve values from a reference table

2007-07-10 Thread Marc Mamin
t may insert a new row, the returned id is invariant for a given user (I don't really understand the holdability of immutable functions; are the results cached only for the lifetime of a prepared statement, or can they be shared by different sessions?) Thanks, Marc --Table

[PERFORM] Weird row estimate

2007-07-11 Thread Marc Cousin
Hi, I'm having a weird problem on a query : I've simplified it to get the significant part (see end of message). The point is I've got a simple SELECT field FROM table WHERE 'condition1' Estimated returned rows : 5453 Then SELECT field FROM table WHERE 'condition2' Estimated returned rows : 705 Th

Re: [PERFORM] Weird row estimate

2007-07-12 Thread Marc Cousin
On Wednesday 11 July 2007 22:35:31, Tom Lane wrote: > Marc Cousin <[EMAIL PROTECTED]> writes: > > Nevertheless, shouldn't the third estimate be smaller than or equal to the sum > > of the two others ? > > The planner's estimation for subplan conditi

[PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...

2007-07-17 Thread Marc Mamin
4GB RAM, 4 cpus) and the benchmark server; one of the targets of this benchmark is to verify the scalability of our application. And you have no reason to be envious as the server doesn't belong to us :-) Thanks for your comments, Marc Mamin Postgres version: 8.2.1 Server

Re: [PERFORM] Postgres configuration for 64 CPUs, 128 GB RAM...

2007-07-24 Thread Marc Mamin
size of about 400 GB, simulating 3 different customers, also with data quite equally split across 3 schemas. I will post our configuration(s) later on. Thanks again for all your valuable input. Marc Mamin

Re: [PERFORM] join tables vs. denormalization by trigger

2007-09-04 Thread Marc Mamin
pe to help, Marc -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Walter Mauritz Sent: Tuesday, September 04, 2007 8:53 PM To: pgsql-performance@postgresql.org Subject: [PERFORM] join tables vs. denormalization by trigger Hi, I wonder about differenc

[PERFORM] performance regression with Linux 2.6.33 and glibc 2.12

2010-06-04 Thread Marc Cousin
or software RAID). Here is the trivial test : The configuration is the default configuration, just after initdb CREATE TABLE test (a int); CREATE INDEX idxtest on test (a); with wal_sync_method = open_datasync (new default) marc=# INSERT INTO test SELECT generate_series(1,10);
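
For comparison with the old behaviour, the setting under suspicion can be switched back in postgresql.conf and the INSERT test re-run (a sketch; fdatasync was the previously used sync method on Linux):
  wal_sync_method = fdatasync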

Re: [PERFORM] performance regression with Linux 2.6.33 and glibc 2.12

2010-06-04 Thread Marc Cousin
The Friday 04 June 2010 15:59:05, Tom Lane wrote: > Marc Cousin writes: > > I hope I'm not going to expose an already known problem, but I couldn't > > find it in the mailing list archives (I only found > > http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543

[PERFORM] CREATE INDEX as bottleneck

2010-11-11 Thread Marc Mamin
naive, but why can't postgres use multiple threads for a large sort operation? best regards, Marc Mamin
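
Index builds were single-process in that era (parallel builds came much later), so the main lever is the memory the sort is allowed to use; a minimal sketch with illustrative names:
  SET maintenance_work_mem = '1GB';                -- memory budget for the CREATE INDEX sort
  CREATE INDEX big_idx ON big_table (some_column);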

Re: [PERFORM] CREATE INDEX as bottleneck

2010-11-11 Thread Marc Mamin
No, CONCURRENTLY is there to improve table availability during index creation, but it degrades performance. best regards, Marc Mamin -Original Message- From: Alex Hunsaker [mailto:bada...@gmail.com] Sent: Thursday, 11 November 2010 19:55 To: Marc Mamin Cc: pgsql-performance

Re: [PERFORM] anti-join chosen even when slower than old plan

2010-11-13 Thread Marc Mamin
formula on the fly. best regards, Marc Mamin

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Marc Mamin
ithin the given transaction. regards, Marc Mamin -Original Message- From: pgsql-performance-ow...@postgresql.org on behalf of Shrirang Chitnis Sent: Wed 12/8/2010 8:05 To: Bryce Nesbitt; pgsql-performance@postgresql.org Subject: Re: [PERFORM] hashed subplan 5000x slower tha

Re: [PERFORM] hashed subplan 5000x slower than two sequential operations

2010-12-08 Thread Marc Mamin
Another point: would a conditional index help? on articles (context_key) where indexed regards, -Original Message- From: pgsql-performance-ow...@postgresql.org on behalf of Marc Mamin Sent: Wed 12/8/2010 9:06 To: Shrirang Chitnis; Bryce Nesbitt; pgsql-performance

Re: [PERFORM] UNION and bad performance

2010-12-11 Thread Marc Mamin
Hello, UNION will remove all duplicates, so the result additionally needs to be sorted. Anyway, for performance issues, you should always start the investigation with explain analyze. regards, Marc Mamin From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow
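
A minimal illustration of the difference (hypothetical table names):
  SELECT id FROM orders_2009 UNION     SELECT id FROM orders_2010;  -- duplicates removed: requires an extra sort/hash step
  SELECT id FROM orders_2009 UNION ALL SELECT id FROM orders_2010;  -- rows simply appended, no deduplication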

Re: [PERFORM] Major performance problem after upgrade from 8.3 to 8.4

2011-01-07 Thread Marc Antonio
fast in 8.4. Best regards, Marc -- View this message in context: http://postgresql.1045698.n5.nabble.com/Major-performance-problem-after-upgrade-from-8-3-to-8-4-tp2796390p3329435.html Sent from the PostgreSQL - performance mailing list archive at Nabble.com. -- Sent via pgsql-performance

Re: [PERFORM] inheritance: planning time vs children number vs column number

2011-02-28 Thread Marc Cousin
The Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote : > On 28.02.2011 11:38, Marc Cousin wrote: > > I've been facing a very large (more than 15 seconds) planning time in a > > partitioned configuration. The amount of partitions wasn't completely > >

Re: [PERFORM] inheritance: planning time vs children number vs column number

2011-02-28 Thread Marc Cousin
The Monday 28 February 2011 16:35:37, Tom Lane wrote : > Marc Cousin writes: > > The Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote : > >> Testing here with a table with 1000 columns and 100 partitions, about > >> 80% of the planning time is looking up

Re: [PERFORM] inheritance: planning time vs children number vs column number

2011-02-28 Thread Marc Cousin
On Tuesday 01 March 2011 07:20:19, Tom Lane wrote: > Marc Cousin writes: > > The Monday 28 February 2011 16:35:37, Tom Lane wrote : > >> Could we see a concrete example demonstrating that? I agree with Heikki > >> that it's not obvious what you are testing that

Re: [PERFORM] inheritance: planning time vs children number vs column number

2011-03-01 Thread Marc Cousin
The Tuesday 01 March 2011 16:33:51, Tom Lane wrote: > Marc Cousin writes: > > On Tuesday 01 March 2011 07:20:19, Tom Lane wrote: > >> It's worth pointing out that the only reason this effect is dominating > >> the runtime is that you don't have any statistic

[PERFORM] partitioning and locking problems

2006-01-31 Thread Marc Morin
owing periodically select change_rule(1); We've looked at the code and the rule changes appear "easy" but we are concerned about the required changes for truncate. Thanks Marc

Re: [PERFORM] partitioning and locking problems

2006-02-01 Thread Marc Morin
m: Tom Lane [mailto:[EMAIL PROTECTED] > Sent: Wednesday, February 01, 2006 12:50 AM > To: Marc Morin > Cc: pgsql-performance@postgresql.org > Subject: Re: [PERFORM] partitioning and locking problems > > "Marc Morin" <[EMAIL PROTECTED]> writes: > > Would l

Re: [PERFORM] partitioning and locking problems

2006-02-02 Thread Marc Morin
, 2006 7:44 AM > To: Marc Morin > Cc: pgsql-performance@postgresql.org > Subject: Re: [PERFORM] partitioning and locking problems > > Hi, Marc, > > Marc Morin wrote: > > > 1- long running report is running on view > > 2- continuous inserters into v

Re: [PERFORM] partitioning and locking problems

2006-02-07 Thread Marc Morin
All good ideas; unfortunately, we can't change the inserting application code easily. > -Original Message- > From: Simon Riggs [mailto:[EMAIL PROTECTED] > Sent: Tuesday, February 07, 2006 5:09 PM > To: Marc Morin > Cc: Markus Schaber; pgsql-performance@postgres

Re: [PERFORM] 10+hrs vs 15min because of just one index

2006-02-12 Thread Marc Morin
sioning, shared_buffers and checkpoint_segments are interconnected in weird and wonderful ways... Seldom have found "simple" solutions to performance problems. Marc > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of > Aaron Turne

[PERFORM] Trouble managing planner for timestamptz columns

2006-03-10 Thread Marc Morin
uld be nice if postgres could understand somehow that some columns are "dynamic" and that their histograms could be stretched to the maximal values, or some other technique for estimating rows to the right of the range of values in the histogram... Or have so

Re: [PERFORM] Trouble managing planner for timestamptz columns

2006-03-10 Thread Marc Morin
ailto:[EMAIL PROTECTED] > Sent: Friday, March 10, 2006 1:31 PM > To: Marc Morin > Cc: pgsql-performance@postgresql.org > Subject: Re: [PERFORM] Trouble managing planner for > timestamptz columns > > "Marc Morin" <[EMAIL PROTECTED]> writes: > > We tend

Re: [PERFORM] Query runs too long for indexed tables

2006-04-04 Thread Marc Morin
Wondering if Update firma1.rid set toode=null where toode is not null and not exists(select 1 from firma1.toode where toode=rid.toode); Would be faster... Problem appears to be the seqscan of seqscan... No? > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On B

Re: [PERFORM] Query runs too long for indexed tables

2006-04-04 Thread Marc Morin
Explain analyze would be nice ;-) > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] On Behalf Of Andrus > Sent: Tuesday, April 04, 2006 3:37 PM > To: pgsql-performance@postgresql.org > Subject: [PERFORM] Query runs too long for indexed tables > > I have relative

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Marc Cousin
I hope I'm not going to say stupid things, but here's what I know (or I think I know :) ) about bacula + postgresql. If I remember correctly (I already discussed this with Kern Sibbald a while ago), bacula does each insert in its own transaction: that's how the program is done, and of course i

[PERFORM] LIKE query problem

2006-09-18 Thread Marc McIntyre
I'm having a problem with a simple query, that finds children of a node, using a materialized path to the node. The query: select n1.id from nodes n1, nodes n2 where n1.path like n2.path || '%' and n2.id = 14; QUERY PLAN
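
A common two-step workaround (a sketch, not the thread's answer): fetch the anchor path first so the LIKE pattern becomes a constant the planner can turn into an index range scan, e.g. with a text_pattern_ops index:
  CREATE INDEX nodes_path_like_idx ON nodes (path text_pattern_ops);
  -- step 1: SELECT path FROM nodes WHERE id = 14;            -- say it returns '1.7.14.'
  -- step 2: SELECT id   FROM nodes WHERE path LIKE '1.7.14.%';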

Re: [PERFORM] LIKE query problem

2006-09-18 Thread Marc McIntyre
Thanks Tom, Is that documented somewhere? I can't seem to see any mention of it in the docs. Tom Lane wrote: Marc McIntyre <[EMAIL PROTECTED]> writes: ... Is there a way to perform this efficiently in one query ? No, because you're hoping for an indexscan optimi

[PERFORM] Decreasing BLKSZ

2006-09-25 Thread Marc Morin
hat the i/o looks more like write full blocks   Thanks Marc    

Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Marc Morin
> Would it be possible to change the primary key to > (logtime,key)? This could help keeping the "working window" small. No, the application accessing the data wants all the rows between start and end time for a particular key value. > > Secondly, the real working set is smaller, as the rows

Re: [PERFORM] Decreasing BLKSZ

2006-09-25 Thread Marc Morin
> > The bottom line here is likely to be "you need more RAM" :-( Yup. Just trying to get a handle on what I can do if I need more than 16G of RAM... That's as much as I can put on the installed base of servers, 100s of them. > > I wonder whether there is a way to use table partitioning t

Re: [PERFORM] Decreasing BLKSZ

2006-09-26 Thread Marc Morin
d to drill down. So, I'd like to have my cake and eat it too... :-) I'd like to have my indexes built as rows are inserted into the partition to help with the drill down... > -Original Message- > From: Bucky Jordan [mailto:[EMAIL PROTECTED] > Sent: Tuesday, September 26,

Re: [PERFORM] Optimicing Postgres for SunSolaris10 on V240

2006-11-18 Thread Marc Cousin
Hi... Bacula doesn't use transactions right now, so every insert is done separately with autocommit. Moreover, the insert loop for the main table is done by several individual queries to insert data into several tables (filename, dir, then file), so this is slow. There's work underway to speed that up,

[PERFORM] Problem with large query

2004-09-08 Thread Marc Cousin
Hi. I hope I'm not asking too trivial a question here... I'm having trouble with a (quite big) query, and can't find a way to make it faster. Here is the information: Tables : sces_vte -> 2753539 rows sces_art -> 602327 sces_fsf -> 8126 sces_frc -> 7763 sces_tps ->

Re: [PERFORM] Problem with large query

2004-09-08 Thread Marc Cousin
ember 2004 16:40, you wrote: > Marc Cousin <[EMAIL PROTECTED]> writes: > > I'm having trouble with a (quite big) query, and can't find a way to make it > > faster. > > Seems like it might help if the thing could use a HashAggregate instead > of sort/gro

Re: [PERFORM] Problem with large query

2004-09-08 Thread Marc Cousin
On Wednesday 08 September 2004 16:56, you wrote: > Marc Cousin <[EMAIL PROTECTED]> writes: > > The query has been generated by business objects ... I'll try to suggest to the > > developers to remove this constant (if they can)... > > The fields used by the

Re: [PERFORM] Does PostgreSQL run with Oracle?

2004-10-15 Thread Marc Slemko
On Fri, 15 Oct 2004 11:54:44 -0500, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: > My basic question to the community is "is PostgreSQL approximately as fast > as Oracle?" > > I don't want benchmarks, they're BS. I want a gut feel from this community > because I know many of you are in mixed shop

Re: [PERFORM] Optimizing PostgreSQL for Windows

2007-10-30 Thread Marc Schablewski
large datasets and other applications running. In my experience, shared_buffers is more important than work_mem. Have you tried increasing default_statistics_target (e.g. to 200 or more) and after that running "analyze" on your tables or the entire database? Marc Christian Rengst
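
For example (a sketch; the statistics target can also be raised for individual columns, and mytable/mycol are placeholders):
  SET default_statistics_target = 200;   -- session-level, picked up by the next ANALYZE
  ANALYZE;                               -- or ANALYZE mytable;
  ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 200;   -- per-column variant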

[PERFORM] strange plan choice

2008-02-19 Thread Cousin Marc
Hi, This occurs on postgresql 8.2.5. I'm a bit at a loss with the plan chosen for a query. The query is this one: SELECT SULY_SAOEN.SAOEN_ID, SULY_SDCEN.SDCEN_REF, SULY_SDCEN.SDCEN_LIB, CSTD_UTI.UTI_NOM, CSTD_UTI.UTI_LIBC, SULY_SAOEN.SAOEN_DTDERNENVOI, SULY_SDCEN.SDCEN_DTLIMAP, SULY_PF

Re: [PERFORM] Delete performance again

2008-10-03 Thread Marc Mamin
Hi, Maybe you can try this syntax. I'm not sure, but it might perform better: delete from company_alias USING comprm where company_alias.company_id = comprm.id Cheers, Marc

Re: [PERFORM] Query optimization

2008-11-30 Thread Marc Cousin
On Sunday 30 November 2008 19:45:11, tmp wrote: > I am struggling with the following query which fetches a random subset > of 200 questions that match certain tags within certain languages. > However, the query takes forever to evaluate, even though I have a > "limit 200" appended. An

[PERFORM] temp_tablespaces and RAID

2008-12-22 Thread Marc Mamin
Hello, To improve performance, I would like to try moving the temp_tablespaces locations outside of our RAID system. Is this good practice? Thanks, Marc Mamin

Re: [PERFORM] Very specialised query

2009-03-27 Thread Marc Mamin
in my example is the best method though. Marc Mamin SELECT l1.id AS id1, l2.id AS id2 FROM location l1, location l2 WHERE l1.objectid = 1 AND (l2.start BETWEEN l1.start AND l1.end OR l1.start BETWEEN l2.start AND l2.end ) l1.start AND l2.

Re: [PERFORM] Very specialised query

2009-03-30 Thread Marc Mamin
makes sense: .. WHERE l2.start BETWEEN  l1.start AND l1.end .. UNION .. WHERE l1.start BETWEEN  l2.start AND l2.end .. The first clause being equivalent to AND l1.start <= l2.end AND l1.end >= l2.start AND l1.start <= l2.start I don't know how you have to dea

Re: [PERFORM] Very specialised query

2009-03-30 Thread Marc Mamin
Hello Matthew, Another idea: Are your objects limited to some smaller ranges of your whole interval ? If yes you may possibly reduce the ranges to search for while using an additional table with the min(start) max(end) of each object... Marc Mamin

Re: [PERFORM] Scalability in postgres

2009-06-04 Thread Marc Cousin
It's not that trivial with Oracle either. I guess you had to use shared servers to get to that amount of sessions. They're most of the time not activated by default (dispatchers is at 0). Granted, they are part of the 'main' product, so you just have to set up dispatchers, shared servers, circu

[PERFORM] Very big insert/join performance problem (bacula)

2009-07-13 Thread Marc Cousin
with bacula ... effective_cache_size = 800MB default_statistics_target = 1000 PostgreSQL is 8.3.5 on Debian Lenny I'm sorry for this very long email; I tried to be as precise as I could, but don't hesitate to ask me for more. Thanks for helping. Marc Cousin

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-13 Thread Marc Cousin
We regularly do all of dbcheck. This is our real configuration; there are really lots of servers and lots of files (500 million files backed up every month). But thanks for mentioning that. The thing is we're trying to improve bacula with postgresql in order to make it able to bear with this

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-13 Thread Marc Cousin
> > While this is not your question, I still noticed you seem to be on 8.3 - > it might be a bit faster to use GROUP BY instead of DISTINCT. It didn't make a big difference, I had already tried that before for this query. Anyway, as you said, it's not the query having problems :) > Your effective_cac

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-14 Thread Marc Cousin
On Tuesday 14 July 2009 10:15:21, you wrote: > Marc Cousin wrote: > >> Your effective_cache_size is really small for the system you seem to > >> have - it's the size of IO caching your os is doing and uses no resources > >> itself. And 800MB of that on a system

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-14 Thread Marc Cousin
On Tuesday 14 July 2009 10:23:25, Richard Huxton wrote: > Marc Cousin wrote: > > Temporarily I moved the problem to a bit higher batch sizes by > > changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel > > like a sorcerer's apprentice with this, as I

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Marc Cousin
ar work. There are other things I am thinking of: maybe it would be better to have sort space on another (and not DRBD'ed) raid set? We have quite a cheap setup right now for the database, and I think maybe this would help it scale better. I can get a filesystem in another volume group, whi

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Marc Cousin
On Wednesday 15 July 2009 15:45:01, Alvaro Herrera wrote: > Marc Cousin wrote: > > There are other things I am thinking of: maybe it would be better to > > have sort space on another (and not DRBD'ed) raid set? we have a quite > > cheap setup right now for the d

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-15 Thread Marc Cousin
On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote: > Marc Cousin wrote: > > This mail contains the requested plans: > > Plan 1 > > around 1 million records to insert, seq_page_cost 1, random_page_cost 4 > > > > -> Hash (cost=425486.72..425486.72

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-16 Thread Marc Cousin
On Thursday 16 July 2009 07:20:18 Marc Cousin wrote: > On Thursday 16 July 2009 01:56:37, Devin Ben-Hur wrote: > > Marc Cousin wrote: > > > This mail contains the requested plans: > > > Plan 1 > > > around 1 million records to insert,

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-16 Thread Marc Cousin
On Thursday 16 July 2009 22:07:25, Kevin Grittner wrote: > Marc Cousin wrote: > > the hot parts of these 2 tables are extremely likely to be in the > > database or linux cache (buffer hit rate was 97% in the example > > provided). Moreover, the first two queries of

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-16 Thread Marc Cousin
On Thursday 16 July 2009 23:54:54, Kevin Grittner wrote: > Marc Cousin wrote: > > to sum it up, should I keep these values (I hate doing this :) ) ? > > Many people need to set the random_page_cost and/or seq_page_cost to > reflect the overall effect of caching on the act

Re: [PERFORM] Very big insert/join performance problem (bacula)

2009-07-23 Thread Marc Cousin
> It really has very little impact. It only affects index scans, and > even then only if effective_cache_size is less than the size of the > table. > > Essentially, when this kicks in, it models the effect that if you are > index scanning a table much larger than the size of your cache, you > migh

Re: [PERFORM] Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version

2009-08-03 Thread Marc Cousin
The few 'obvious' things I see: ID and POLLID aren't of the same type (numeric vs bigint), and TTIME isn't indexed. And as a general matter, you should stick to native datatypes if you don't need numeric. But as said in the other answer, maybe you should redo this schema and use more consistent d
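
A sketch of the two fixes implied above (the table name and current types are assumed; adjust to whichever of the two columns is the numeric one):
  ALTER TABLE poll_data ALTER COLUMN pollid TYPE bigint;    -- align the join columns on a native integer type
  CREATE INDEX poll_data_ttime_idx ON poll_data (ttime);    -- TTIME was reported as unindexed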

Re: [PERFORM] optimizing query with multiple aggregates

2009-10-26 Thread Marc Mamin
Hello, I didn't try it, but the following should be slightly faster: COUNT( CASE WHEN field >= x AND field < y THEN true END) instead of SUM( CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END) HTH, Marc Mamin
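
Side by side (a sketch; t is a placeholder table and x, y stand for the range bounds):
  SELECT SUM(  CASE WHEN field >= x AND field < y THEN 1 ELSE 0 END) FROM t;   -- every row contributes a 0 or 1 to the sum
  SELECT COUNT(CASE WHEN field >= x AND field < y THEN true END)     FROM t;   -- non-matching rows yield NULL and are simply not counted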

Re: [PERFORM] Query improvement

2011-05-03 Thread Marc Mamin
CT p.page_id FROM mediawiki.page p JOIN mediawiki.revision r on (p.page_id=r.rev_page) JOIN mediawiki.pagecontent ss on (r.rev_id=ss.old_id) WHERE (ss.textvector @@ (to_tsquery('fotbal'))) HTH, Marc Mamin

Re: [PERFORM] Postgres for a "data warehouse", 5-10 TB

2011-09-13 Thread Marc Mamin
dat); exception when unique_violation then update t set dat = a_dat where id = a_id and dat <> a_dat; return 0; end; elsif not test then update t set dat = a_dat where id = a_id; return 0; end if; return 1; best regards, Marc Mamin -Ursp
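
A minimal, generic version of the insert-or-update pattern the fragment above builds on (table, column, and parameter names are illustrative):
  CREATE OR REPLACE FUNCTION upsert_t(a_id int, a_dat text) RETURNS int AS $$
  BEGIN
      BEGIN
          INSERT INTO t(id, dat) VALUES (a_id, a_dat);
          RETURN 1;                                   -- a new row was inserted
      EXCEPTION WHEN unique_violation THEN
          UPDATE t SET dat = a_dat WHERE id = a_id AND dat <> a_dat;
          RETURN 0;                                   -- the row already existed
      END;
  END;
  $$ LANGUAGE plpgsql;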

Re: [PERFORM] overzealous sorting?

2011-09-26 Thread Marc Cousin
'cdr'::text) > -> Bitmap Index Scan on tevent_cdr_timestamp > (cost=0.00..57.31 rows=2477 width=0) (actual time=0.404..0.404 > rows=2480 loops=1) > Index Cond: (("timestamp" >= '2011-09-09 > 22:00:00+10'::timestamp with

Re: [PERFORM] overzealous sorting?

2011-09-27 Thread Marc Cousin
On Tue, 27 Sep 2011 12:45:00 +1000, anthony.ship...@symstream.com wrote: > On Monday 26 September 2011 19:39, Marc Cousin wrote: > > Because Index Scans are sorted, not Bitmap Index Scans, which > > build a list of pages to visit, to be then visited by the Bitmap >

Re: [PERFORM] overzealous sorting?

2011-09-27 Thread Marc Cousin
On Tue, 27 Sep 2011 19:05:09 +1000, anthony.ship...@symstream.com wrote: > On Tuesday 27 September 2011 18:54, Marc Cousin wrote: > > The thing is, the optimizer doesn't know if your data will be in > > cache when you run your query… if you are sure most of your >

Re: [PERFORM] Postgres array parser

2011-12-14 Thread Marc Mamin
ACH version. CREATE OR REPLACE FUNCTION input_value_un (in_inputs numeric[], in_input_nr numeric) RETURNS numeric AS $BODY$ SELECT u[1][2] FROM unnest($1, SLICE =1) u WHERE u[1][1]=in_input_nr LIMIT 1; $BODY$ LANGUAGE sql IMMUTABLE; best regards, Marc Ma

Re: [PERFORM] Postgres array parser

2011-12-14 Thread Marc Mamin
USD 12.0 => x overview - ---- a {(EUR,20.0), (CHF,7.5)} b {(USD,10.0)} regards, Marc Mamin > On 12/14/2011 11:21 AM, Marc Mamin wrote: > > Hello, > > > > For such cases (see below), it would be nice to have an unnest > function that on

Re: [PERFORM] Duplicate deletion optimizations

2012-01-06 Thread Marc Eberhard
t only worth doing this for a large number of inserted/updated elements? What if the number of inserts/updates is only a dozen at a time for a large table (>10M entries)? Thanks, Marc

Re: [PERFORM] Duplicate deletion optimizations

2012-01-06 Thread Marc Eberhard
On 6 January 2012 20:38, Samuel Gendler wrote: > On Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard > wrote: >> On 6 January 2012 20:02, Samuel Gendler wrote: >> > Have you considered doing the insert by doing a bulk insert into a temp >> > table and then pulling rows
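
A generic sketch of that staging-table approach (all names are made up; load the staging table with COPY or a multi-row INSERT first):
  CREATE TEMP TABLE t_stage (LIKE t);
  -- ... bulk load t_stage ...
  DELETE FROM t_stage USING t WHERE t_stage.id = t.id AND t_stage.dat = t.dat;   -- drop rows already present unchanged
  UPDATE t SET dat = s.dat FROM t_stage s WHERE t.id = s.id;                     -- rows whose value changed
  INSERT INTO t SELECT s.* FROM t_stage s
      WHERE NOT EXISTS (SELECT 1 FROM t WHERE t.id = s.id);                      -- genuinely new rows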

Re: [PERFORM] Duplicate deletion optimizations

2012-01-07 Thread Marc Mamin
imp(t_value,t_record,output_id) where t_imp.id is not null. regards, Marc Mamin -Original Message- From: pgsql-performance-ow...@postgresql.org on behalf of Jochen Erwied Sent: Sat 1/7/2012 12:57 To: anto...@inaps.org Cc: pgsql-performance@postgresql.org Subject: Re: [PERF

Re: [PERFORM] Duplicate deletion optimizations

2012-01-07 Thread Marc Eberhard
he main table with a short one-line SQL delete statement before the interpolation and merge. > Tada. :- > Enjoy ! I certainly will. Many thanks for those great lines of SQL! Hope you recover from your flu quickly! All the best, Marc

[PERFORM] How to remove a table statistics ?

2012-01-31 Thread Marc Mamin
l fix that model, but am first looking for a quick way to restore performance on our production servers. best regards, Marc Mamin
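
One brute-force possibility (not from the thread; requires superuser, and the statistics come back at the next ANALYZE or autovacuum run):
  DELETE FROM pg_statistic WHERE starelid = 'my_schema.my_table'::regclass;   -- table name is a placeholder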

Re: [PERFORM] How to remove a table statistics ?

2012-01-31 Thread Marc Mamin
sce(toplevelrid,msoffset::varchar); without stats: http://explain.depesz.com/s/qPg with stats: http://explain.depesz.com/s/88q aserr_20120125_tvi: GIN Index on my_func(.,.,.,.,.,.) best regards, Marc Mamin > -Original Message- > From: pgsql-performance-ow...@postgre

[PERFORM] text search: tablescan cost for a tsvector

2012-02-06 Thread Marc Mamin
position. So I repeated the test with an additional search term at the last position, but without significant change: (result from the 6th test below) without analyze: http://explain.depesz.com/s/6At with analyze: http://explain.depesz.com/s/r3B best regards, Marc Mamin Here are all my results

Re: [PERFORM] text search: tablescan cost for a tsvector

2012-02-29 Thread Marc Mamin
> From: Robert Haas [mailto:robertmh...@gmail.com] > Sent: Wed 2/29/2012 7:32 > > On Mon, Feb 6, 2012 at 6:05 AM, Marc Mamin wrote: > > without analyze: http://explain.depesz.com/s/6At > > with analyze: http://explain.depesz.com/s/r3B ... > The problem seems to

[PERFORM] Partitioning / Strange optimizer behaviour

2012-03-05 Thread Marc Schablewski
t (cost=0.00..20944125.72 rows=1031020672 width=8) I would expect this to run for half an hour or so, completely overloading the server... Any ideas? Kind regards Marc

Re: [PERFORM] Partitioning / Strange optimizer behaviour

2012-03-05 Thread Marc Schablewski
Thanks for pointing me to that article. I totally forgot that the postgres wiki existed. Updating is not an option at the moment, but we'll probably do so in the future. Until then I can live with the workaround. Kind regards, Marc

Re: [PERFORM] Performance of a large array access by position (tested version 9.1.3)

2012-06-26 Thread Marc Mamin
ble t2 ( _array int[]); alter table t2 alter _array set storage external; insert into t2 SELECT ARRAY(SELECT * FROM generate_series(1,500)); explain analyze SELECT _array[1] FROM t1; Total runtime: 0.125 ms explain analyze SELECT _array[1] FROM t2; Total runtime: 8.649 ms best regards, M
