[SQL] Profiling postgresql queries
Hi all,

I was wondering if there is any GUI or command-line based tool running on Windows that could help me profile PostgreSQL query performance. It should be able to do live profiling, since I want to profile the accesses that my application is doing on the Postgres database.

Thanks to all,
Nacef
Re: [SQL] Profiling postgresql queries
On Wed, 09.07.2008 at 10:22:09 +0200, Nacef LABIDI wrote:
> Hi all,
> I was wondering if there is any GUI or command line based tool running on
> windows that could help me to profile PostgreSQL queries performance. It
> should

You can use EXPLAIN ANALYSE to analyse your queries (whether they use indexes, the execution plan, estimated versus actual rows, etc.).

> be able to do live profiling since I want to profile the accesses that my
> application is doing on the Postgres database.

You can log slow queries with a setting in your postgresql.conf:

log_min_duration_statement = N

where N is a duration in milliseconds; it logs all queries with an execution time longer than that.

Regards, Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net
--
Sent via pgsql-sql mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
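[Editor's note: a minimal sketch of the two suggestions above, for reference. The table and column names are hypothetical examples, not from the thread.]

```sql
-- EXPLAIN ANALYZE actually runs the query and reports the chosen plan
-- (sequential scan vs. index scan), estimated vs. actual row counts,
-- and per-node timings. "orders" / "customer_id" are made-up names.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
```

For the logging suggestion, the corresponding postgresql.conf line would be e.g. `log_min_duration_statement = 200` to log every statement that takes longer than 200 ms (0 logs all statements, -1 disables the feature); the setting takes effect after a configuration reload.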
[SQL] selecting N record for each group
There are many ways to achieve this:

http://rickosborne.org/blog/index.php/2008/01/07/sql-getting-top-n-rows-for-a-grouped-query/
http://www.xaprb.com/blog/2006/12/07/how-to-select-the-firstleastmax-row-per-group-in-sql/
http://weblogs.sqlteam.com/jeffs/archive/2007/03/30/More-SQL-Server-2005-Solutions.aspx

I was wondering if any of the available methods performs better on PostgreSQL. The actual situation is:

create table type(
    id int primary key,
    name varchar(32)
);

create table list(
    id int primary key,
    name varchar(32),
    type int references type (id),
    inserted timestamp
);

I'd like to get the type.name and list.name of the N newest items for a set of types (e.g. where type.id in (1,2,4)). The list of types I'm interested in is composed of 20-100 types out of 10K distinct types; the list table has 1M records; N is in the order of 5-20.

I could use plpgsql too, but I'm not sure it would be any better than plain SQL.

thanks

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
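[Editor's note: a sketch of one common top-N-per-group approach against the schema above, using a window function. This requires PostgreSQL 8.4 or later; on the 8.3 releases current when this was posted, one of the correlated-subquery or DISTINCT ON techniques from the linked articles would be needed instead. N = 5 and the type ids (1, 2, 4) are the example values from the question.]

```sql
-- Rank rows within each type by recency, then keep the top N per type.
SELECT type_name, item_name, inserted
FROM (
    SELECT t.name AS type_name,
           l.name AS item_name,
           l.inserted,
           row_number() OVER (PARTITION BY l.type
                              ORDER BY l.inserted DESC) AS rn
    FROM list l
    JOIN type t ON t.id = l.type
    WHERE l.type IN (1, 2, 4)        -- the set of types of interest
) ranked
WHERE rn <= 5                        -- N newest items per type
ORDER BY type_name, inserted DESC;
```

With an index on list (type, inserted), the planner can often satisfy the per-partition ordering without a full sort of the 1M-row table.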
[SQL] Re: Best practice for long-lived journal tables: bigint or recycling IDs?
> > > 3. Deal with wraparound by ensuring that the applications behave sanely
> >
> > Wrap-around?
> >
> > Exceeding the max size of "int" looks more like a brick wall than
> > wrap-around to me:
> >
> > insert into t values (2147483648);
> > ERROR: integer out of range
>
> Hmm, you can alter the sequence so that it wraps around at the point it
> reaches INT_MAX. So inserting this number would never actually happen.
Ah, that does look like the best solution. I'll confirm that will work for our
cases.
Thanks!
Mark
> alvherre=# create table t (a serial);
> NOTICE: CREATE TABLE will create implicit sequence "t_a_seq" for
> serial column "t.a"
> CREATE TABLE
> alvherre=# alter sequence t_a_seq maxvalue 2147483647;
> ALTER SEQUENCE
> alvherre=# alter sequence t_a_seq cycle;
> ALTER SEQUENCE
> alvherre=# select setval('t_a_seq', 2147483645);
>    setval
> ------------
>  2147483645
> (1 row)
>
> alvherre=# insert into t default values;
> INSERT 0 1
> alvherre=# insert into t default values;
> INSERT 0 1
> alvherre=# insert into t default values;
> INSERT 0 1
> alvherre=# insert into t default values;
> INSERT 0 1
> alvherre=# insert into t default values;
> INSERT 0 1
> alvherre=# select * from t;
>      a
> ------------
>  2147483646
>  2147483647
>           1
>           2
>           3
> (5 rows)
