Re: [PERFORM] Tables Without OIDS and its effect

2003-12-12 Thread Neil Conway
Sai Hertz And Control Systems <[EMAIL PROTECTED]> writes:
> I have created my tables without OIDS; now my doubts are:
> 1. Will this speed up the data insertion process?

Slightly. It means that each inserted row will be 4 bytes smaller (on
disk), which in turn means you can fit more tuples on a page, and
therefore you'll need fewer pages and less disk space. However, I'd be
surprised if the performance improvement is very significant.

> 2. Though I have not written any code in any of my pgsql functions
> that depends on OIDs:
>   1. Will the functions behave differently internally without OIDs?
>   2. Will my application break at any point?

No.

BTW, we intend to phase out the use of OIDs for user tables in the
long term. There have been a few threads on -hackers that discuss the
plans for doing this.
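For anyone who wants to try it, a minimal sketch (the table and columns
here are made up for illustration):

   CREATE TABLE orders (
   order_id  integer PRIMARY KEY,
   placed_at timestamp NOT NULL
   ) WITHOUT OIDS;  -- each stored tuple is 4 bytes narrower than with OIDs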

-Neil




Re: [PERFORM] Update on putting WAL on ramdisk/

2003-12-12 Thread William Yu
Russell Garrett wrote:
>> WAL on single drive: 7.990 rec/s
>> WAL on 2nd IDE drive: 8.329 rec/s
>> WAL on tmpfs: 13.172 rec/s
>>
>> A huge jump in performance but a bit scary having a WAL that can
>> disappear at any time. I'm gonna work up an rsync script and do some
>> power-off experiments to see how badly it gets mangled.
>
> Surely this is just equivalent to disabling fsync? If you put a WAL on a
> volatile file system, there's not a whole lot of point in having one at all.

These tests were all with fsync off.

And no, it's not equivalent to fsync off, since the WAL is always written
immediately regardless of the fsync setting.



Re: [PERFORM] Update on putting WAL on ramdisk/

2003-12-12 Thread Russell Garrett
> WAL on single drive: 7.990 rec/s
> WAL on 2nd IDE drive: 8.329 rec/s
> WAL on tmpfs: 13.172 rec/s
>
> A huge jump in performance but a bit scary having a WAL that can
> disappear at any time. I'm gonna work up an rsync script and do some
> power-off experiments to see how badly it gets mangled.

Surely this is just equivalent to disabling fsync? If you put a WAL on a
volatile file system, there's not a whole lot of point in having one at all.


Russ Garrett  [EMAIL PROTECTED]
  http://last.fm




[PERFORM] Update on putting WAL on ramdisk/

2003-12-12 Thread William Yu
Some arbitrary data processing job:

WAL on single drive: 7.990 rec/s
WAL on 2nd IDE drive: 8.329 rec/s
WAL on tmpfs: 13.172 rec/s

A huge jump in performance, but a bit scary having a WAL that can
disappear at any time. I'm gonna work up an rsync script and do some
power-off experiments to see how badly it gets mangled.

This could be a good method, though, when you're dumping and restoring an
entire DB: make a tmpfs mount, restore, shut down the DB, and then copy
the WAL back to the HD.

I checked out the SanDisk IDE FlashDrives. They have a write cycle life 
of 2 million. I'll explore more expensive solid state drives.



Re: [PERFORM] [HACKERS] fsync method checking

2003-12-12 Thread Tom Lane
Manfred Spraul <[EMAIL PROTECTED]> writes:
> One advantage of a separate write and fsync call is better performance
> for the writes that are triggered within AdvanceXLInsertBuffer: I'm not
> sure how often that's necessary, but it's a write while holding both the
> WALWriteLock and WALInsertLock. If every write contains an implicit
> sync, that call would be much more expensive than necessary.

Ideally that path isn't taken very often.  But I'm currently having a
discussion off-list with a CMU student who seems to be seeing a case
where it happens a lot.  (She reports that both WALWriteLock and
WALInsertLock are causes of a lot of process blockages, which seems to
mean that a lot of the WAL I/O is being done with both held, which would
have to mean that AdvanceXLInsertBuffer is doing the I/O.  More when we
figure out what's going on exactly...)

regards, tom lane



Re: [PERFORM] Performance related to size of tables

2003-12-12 Thread Mark Kirkwood
If you want to speed up the elapsed times, then the first thing would be
to attempt to reduce the IO using some indexes, e.g. on test1(anumber),
test2(anumber), test3((anumber%13)), test3((anumber%5)) and
test4((anumber%27)) - see the sketch below.
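
A minimal sketch of those indexes (the index names are made up, and if I
recall correctly the modulo ones need 7.4's general expression indexes):

   CREATE INDEX test1_anumber_idx ON test1 (anumber);
   CREATE INDEX test2_anumber_idx ON test2 (anumber);
   CREATE INDEX test3_mod13_idx ON test3 ((anumber % 13));
   CREATE INDEX test3_mod5_idx ON test3 ((anumber % 5));
   CREATE INDEX test4_mod27_idx ON test4 ((anumber % 27));
   ANALYZE;  -- refresh statistics so the planner will consider them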

However, if you wish to keep hammering the IO, then you would not use any
indexes. Elapsed times for operations like:

CREATE TABLE test4 AS SELECT ... FROM test1 JOIN test2 ON 
test1.anumber=test2.anumber;

are going to increase non-linearly with the size of the source table
test1 (unless there are indexes on the anumber columns).

I think this particular test is designed as a testbed for measuring IO
performance - as opposed to PostgreSQL performance.

regards

Mark

[EMAIL PROTECTED] wrote:

> Hi everyone,
> I found that performance gets worse as the size of a given table
> increases. I mean, for example, I've just run some scripts shown at
>
> http://www.potentialtech.com/wmoran/postgresql.php
>
> I understand that those scripts are designed to see the behavior of
> PostgreSQL under different filesystems. However, since they generate a
> lot of I/O activity, I think they can be used to adjust some
> configuration parameters. To that end, I increased the number of tuples
> inserted in the initial table to 200 and 300. What I saw is that the
> running time goes from 3 min. to 11 min. My question is: is it possible
> to use that test to tune some parameters? If the answer is yes, what
> parameters should I change to get shorter running times?

 





[PERFORM] Performance related to size of tables

2003-12-12 Thread nbarraza
Hi everyone,
I found that performance gets worse as the size of a given table
increases. I mean, for example, I've just run some scripts shown at

http://www.potentialtech.com/wmoran/postgresql.php

I understand that those scripts are designed to see the behavior of
PostgreSQL under different filesystems. However, since they generate a
lot of I/O activity, I think they can be used to adjust some
configuration parameters. To that end, I increased the number of tuples
inserted in the initial table to 200 and 300. What I saw is that the
running time goes from 3 min. to 11 min. My question is: is it possible
to use that test to tune some parameters? If the answer is yes, what
parameters should I change to get shorter running times?

Thanks a lot

Nestor



Re: [PERFORM] Hardware suggestions for Linux/PGSQL server

2003-12-12 Thread William Yu
Shridhar Daithankar wrote:
> FWIW, there are only two pieces of software that need to be 64-bit aware
> for a typical server job: the kernel and glibc. The rest of the apps can
> do fine as 32-bit, unless you are Oracle and insist on outsmarting the OS.
>
> In fact, running 32-bit apps on a 64-bit OS has plenty of advantages,
> like using the cache effectively. Unless you need 64-bit, going for
> 64-bit software is not advised.

This is a good point. While doing research on this matter a few months 
back, I saw comments by people testing 64-bit MySQL that some operations 
would run faster and some slower due to the use of 64-bit datatypes 
versus 32-bit. The best solution in the end is probably to run 32-bit 
Postgres under a 64-bit kernel -- unless your DB tends to have a lot of 
64-bit datatypes.



Re: [PERFORM] Query plan - now what?

2003-12-12 Thread Tom Lane
David Shadovitz <[EMAIL PROTECTED]> writes:
> If you think that you or anyone else would invest the time, I could post more
> info.

I doubt you will get any useful help if you don't post more info.

> I will also try Shridhar's suggestions on the statistics target and
> enable_hashjoin.

It seemed to me that the row estimates were not so far off that I would
call it a statistical failure; you can try increasing the stats target
but I'm not hopeful about that.  My guess is that you will have to look
to revising either the query or the whole database structure (ie,
merging tables).  We'll need the info I asked for before we can make
any recommendations, though.

regards, tom lane



[PERFORM] Tables Without OIDS and its effect

2003-12-12 Thread Sai Hertz And Control Systems
Dear all,

I have created my tables without OIDS; now my doubts are:
1. Will this speed up the data insertion process?
2. Though I have not written any code in any of my pgsql functions
that depends on OIDs:
 1. Will the functions behave differently internally without OIDs?
 2. Will my application break at any point?
3. I decided to work without OIDs because:
 1. The OID counter has a limit of -2147483648 to +2147483647.
 2. Due to this limitation I would not like to drop and recreate my
database, because that is a bit difficult/dirty process.

All links and suggestions pertaining to OIDs are most welcome; my mailbox
is at your disposal, and don't hesitate to drop a two-line comment.
---
My Sys Config:
RH9.0
PostgreSQL  7.3.4
GCC 3.2.2
PHP  4.3.4
--
Regards,
V Kashyap





Re: [PERFORM] [ADMIN] ODBC Driver generates a too big "windows swap file" and

2003-12-12 Thread scott.marlowe
On Fri, 12 Dec 2003, Rhaoni Chiu Pereira wrote:


Hi, is there a switch in your pgsql/odbc connector to enable cursors?  If 
so, try turning that on.




Re: [PERFORM] Query plan - now what?

2003-12-12 Thread David Shadovitz
> This is not very informative when you didn't show us the query nor
> the table schemas..

> BTW, what did you do with this, print and OCR it?

Tom,

I work in a classified environment, so I had to sanitize the query plan,
print it, and OCR it. I spent a lot of time fixing typos, but I guess at
midnight my eyes missed some. This hassle is why I posted neither the
query nor the schema. The database is normalized, though my use of animal
names of course masks this.

If you think that you or anyone else would invest the time, I could post more 
info.

I will also try Shridhar's suggestions on the statistics target and
enable_hashjoin.

Thanks.
-David



Re: [PERFORM] Query plan - now what?

2003-12-12 Thread Tom Lane
David Shadovitz <[EMAIL PROTECTED]> writes:
> Well, now that I have the plan for my slow-running query, what do I
> do?

This is not very informative when you didn't show us the query nor
the table schemas (column datatypes and the existence of indexes
are the important parts).  I have a feeling that you might be well
advised to fold the multiple tables into one "animals" table, but
there's not enough info here to make that recommendation for sure.
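
To make the folding idea concrete, a sketch only (the real schema was
never posted, so the column names here are invented):

   CREATE TABLE animals AS
   SELECT 'frog'::text AS species, frog_id AS animal_id FROM frogs
   UNION ALL
   SELECT 'tiger'::text AS species, tiger_id AS animal_id FROM tigers;

   CREATE INDEX animals_species_idx ON animals (species);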

BTW, what did you do with this, print and OCR it?  It's full of the
most bizarre typos ... mostly "l" for "1", but others too ...

regards, tom lane



[PERFORM] ODBC Driver generates a too big "windows swap file" and it's too slow

2003-12-12 Thread Rhaoni Chiu Pereira
Hi List,

   First of all, I tried to subscribe to the ODBC list but it seems that
the subscription link is broken! So here it goes:

   I have a Delphi application that uses TTable components to convert DBF
information to PostgreSQL and Oracle databases. My problem is that
PostgreSQL is too slow: the Oracle DB makes this conversion in 3.45 min
and the PG DB makes it in 29 min.
   The software is the same (only the database reference is different);
it uses the BDE to access the database, with the Oracle native driver and
with the PostgreSQL ODBC driver version 5. Both databases are on the same
machine (Pentium 4 1.8GHz, 384MB RAM DDR) running RH 9, Oracle 9i and
PostgreSQL 7.3.2-3.
   When I ran this conversion I "snorted" (sniffed) the communication
between the server and the station to see how it makes its SQL requests;
here it goes:

ORACLE :

- select owner, object_name, object_type, created from sys.all_objects where 
object_type in ('TABLE', 'VIEW' )  and owner = 'VENDAS' and object_name 
= 'FTCOFI00' order by 1 ASC, 2 ASC

- select owner, index_name, uniqueness from sys.all_indexes where table_owner 
= 'VENDAS' and table_name = 'FTCOFI00' order by owner ASC, index_name ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCOFI01' order by column_position ASC

- SELECT "EMP", "FIL", "CODIGO_FISCAL", "CODIGO_FISCAL_ESTORNO",
"DESCRICAO_FISCAL", "CODIGO_OPERACIONAL", "DESCRICAO_USUARIO",
"COD_NATIPI", "COD_NATIBGE", "EXTO_NF1", "TEXTO_NF2", "NF_NORMALDIF",
"NF_TRANSFILIAL", "COD_FILIAL", "COD_LANCTO_FILIAL",
"NF_EXPORTACAO_DIRETA", "NF_EXPORTACAO_INDIRETA", "NF_SIMPREMESSA",
"NF_DEVOLUCAO", "NF_ENTRADA", "NF_REPOSICAO", "NF_OUTRASERIE",
"NF_CONSIGNACAO", "NF_PRODGRATIS", "NF_FATURANTECIP", "NF_DIFBASEICM",
"NF_DIF_VALORICM", "NF_DIFBASEIPI", "NF_DIFVALORIPI", "NF_DIFPRECO",
"BLOQ_CREDITO", "LIBERA_CREDITO", "VER_PARAM_VENDAS", "ENTRA_COBRANCA",
"BASECALC_VLRBRUTO", "DESCNF_REFICM", "ALIQICM_IGUALEST", "COD_TRIBICM",
"COD_TRIBIPI", "ATUAL_ESTOQUE", "ATUAL_FABRICACAO", "ATUAL_FATURA",
"ATUAL_OUTENTR", "ATUAL_OUTSAIDA", "ATUAL_TRANFIL", "ATUAL_SEMIACAB",
"ATUAL_CARTPED", "ATUAL_ENTRSAID", "REV_CUSTMEDIO", "DIGITAR_FISICO",
"DIGITAR_FINANCEIRO", "USAR_CUSTO_CMU_INFORMAR", "GRUPO_FATURAMENTO",
"TIPO_NF", "RESUMO_FISCAL_CODIGO", "ATUAL_DISTRIB", "IMPR_OBS_NF_REG_ES",
"DIFE_RECEITA", "COD_LANCTO", "SITUACAO"
FROM "FTCOFI00" ORDER BY "EMP" ASC, "FIL" ASC, "CODIGO_FISCAL" ASC

- select owner, object_name, object_type, created from sys.all_objects where 
object_type in ('TABLE', 'VIEW')  and owner = 'VENDAS' and object_name 
= 'FTCLCR00' order by 1 ASC, 2 ASC

- select owner, index_name, uniqueness from sys.all_indexes where table_owner 
= 'VENDAS' and table_name = 'FTCLCR00' order by owner ASC, index_name ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR01' order by column_position ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR02' order by column_position ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR03' order by column_position ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR04' order by column_position ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR05' order by column_position ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR06' order by column_position ASC

- select column_name from sys.all_ind_columns where index_owner = 'VENDAS' and 
index_name = 'FTCLCR07' order by column_position ASC

- SELECT "EMP", "FIL", "TIPO_CADASTRO", "CODIGO", "RAZAO_SOCIAL",
"NOME_FANTASIA", "EMP_ENDERECO", "EMP_NRO", "EMP_COMPLEMENTO",
"EMP_BAIRRO", "EMP_CIDADE", "EMP_ESTADO", "EMP_CEP", "EMP_PAIS",
"EMP_EAN", "COB_ENDERECO", "COB_NRO", "COB_COMPLEMENTO", "COB_BAIRRO",
"COB_CIDADE", "COB_ESTADO", "COB_CEP", "COB_PAIS", "COB_EAN",
"ENT_ENDERECO", "ENT_NRO", "ENT_COMPLEMENTO", "ENT_BAIRRO", "ENT_CIDADE",
"ENT_ESTADO", "ENT_CEP", "ENT_PAIS", "ENT_EAN", "LOJA_EAN", "TELEFONE",
"CELULAR", "FAX", "EMAIL", "SITE", "CONTATO_NOME", "CONTATO_TELEFONE",
"CONTATO_EMAIL", "CONTATO_DDMM_ANIV", "SITUACAO_CADASTRO", "OBSERVACOES",
"DATA_CADASTRO", "DATA_ALTERACAO", "TIPO_CONTRIBUINTE",
"CODIGO_CONTRIBUINTE", "TIPO_INSCRICAO", "CODIGO_INSCRICAO",
"CODIGO_REDE", "CODIGO_TIPO_CLIENTE", "CODIGO_GRUPO_CLIENTE",
"CODIGO_SUFRAMA", "DATA_VALIDADE_SUFRAMA", "LIMITE_CREDITO", "MARCA",
"CLASSE", "BANDEIRA_CLIENTE", "CODIGO_TIPO_CREDOR", "NOME_REPRESENTANTE",
"TIPO_CONDICAO_PGTO", "PRAZO_PGTO_01", "PRAZO_PGTO_02", "PRAZO_PGTO_03",
"CODIGO_MOEDA_COMPRA", "FATOR_QUALIDADE", "DESPESA_FINANCEIRA",
"CODIGO_DARF", "CODIGO_NATUREZA_RENDIMENTO", "CONTA_CORRENTE_BANCO",
"CONTA_CORRENTE_AGENCIA", "CONTA_CORRENTE_NUMERO",
"FORNECEDOR_SULPLASTIC", "SUFRAM

Re: [PERFORM] TRUNCATE veeeery slow compared to DELETE in 7.4

2003-12-12 Thread Hartmut Raschick
For clearer understanding: this is NOT about TRUNCATE being slow "as
such" vs. DELETE, but about a change on the order of a (...) magnitude
from 7.3.4 to 7.4...

here's some more info, plus test results w/a "full" db:

300 tables, 2 pieces of modelled hw, so there's one table w/2 entries;
each model has a special table (per type), too, so entries over all of
them sum up to 2; not all types are present.
Plus: some types (w/not many instances) have "very special" tables, too,
these sometimes w/lots of columns (500-1600)...

Alone on a Sun Fire 280 w/2 U-IIIi CPUs (well, only need one...):
for the whole duration of the test, no vacuum of any kind was performed,
thus - by the book - making things worse... for the DELETE case.

7.4:

"full" database:
TRUNCATE: 0.03u 0.03s 1:21.40 0.0%
DELETE: 0.05u 0.01s 0:04.46 1.3%

empty database:
TRUNCATE: 0.02u 0.05s 1:21.00 0.0%
DELETE: 0.04u 0.04s 0:01.32 6.0%

Now for the 7.3.4 database server (same machine, of course):
--
"full" database:
TRUNCATE: 0.04u 0.04s 0:03.79 2.1%
DELETE: 0.03u 0.03s 0:06.51 0.9%

empty database:
TRUNCATE: 0.04u 0.05s 0:01.51 5.9%
DELETE: 0.01u 0.02s 0:01.00 3.0%
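
(For reference, each timed run boils down to one of these two statements
per table - the table name here is invented:

   DELETE FROM model_data;     -- row-by-row MVCC delete
   TRUNCATE TABLE model_data;  -- whole-table truncation

7.4's TRUNCATE is transaction-safe, unlike 7.3's, so the two releases do
genuinely different work here.)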

What can I say...
...please find the attached configs.

I really don't think this can be explained by table/index complexity:
it's the _same_ schema and contents in both cases, both were created
w/createdb, both were filled the same way (by our server prog), there was
no vacuum anywhere, and test execution order was the same in both cases.

P.S.: My pessimism goes as far as suspecting the sincerity of the
pessimists. - Jean Rostand (1894-1977), Journal d'un caractère, 1931

-- 
Hartmut "Hardy" Raschick / Dept. t2
ke Kommunikations-Elektronik GmbH
Wohlenberstr. 3, 30179 Hannover
Phone: ++49 (0)511 6747-564
Fax: ++49 (0)511 6747-340
e-Mail: [EMAIL PROTECTED]
http://www.ke-elektronik.de

#
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# "pg_ctl reload".


#


#
#   Connection Parameters
#
#tcpip_socket = false
tcpip_socket = true
#ssl = false

#max_connections = 32
max_connections = 128
#superuser_reserved_connections = 2

#port = 5432 
port = 5433 
#hostname_lookup = false
#show_source_port = false

#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777 # octal

#virtual_host = ''

#krb_server_keyfile = ''


#
#   Shared Memory Size
#
#shared_buffers = 64# min max_connections*2 or 16, 8KB each
shared_buffers = 256   # 2*max_connections, min 16, typically 8KB each
#max_fsm_relations = 1000   # min 10, fsm is free space map, ~40 bytes
#max_fsm_pages = 10000  # min 1000, fsm is free space map, ~6 bytes
max_fsm_pages = 5  # min 1000, fsm is free space map, ~6 bytes
#max_locks_per_transaction = 64 # min 10
#wal_buffers = 8# min 4, typically 8KB each
wal_buffers = 40  # min 4, typically 8KB each

#
#   Non-shared Memory Sizes
#
#sort_mem = 1024# min 64, size in KB
sort_mem = 16384# min 64, size in KB
#vacuum_mem = 8192  # min 1024, size in KB
vacuum_mem = 16384  # min 1024, size in KB


#
#   Write-ahead log (WAL)
#
#checkpoint_segments = 3# in logfile segments, min 1, 16MB each
checkpoint_segments = 20# in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300   # range 30-3600, in seconds
checkpoint_timeout = 600# range 30-3600, in seconds
#
#commit_delay = 0   # range 0-100000, in microseconds
commit_delay = 10   # range 0-100000, in microseconds
#commit_siblings = 5# range 1-1000
commit_siblings = 100   # range 1-1000
#
#fsync = true
fsync = false
#wal_sync_method = fsync# the default varies across platforms:
wal_sync_method = fdatasync # the default varies across platforms:
#   # fsync, fdatasync, open_sync, or open_datasync
#wal_debug = 0  # range 0-16


#
#   Optimizer Parameters
#
#enable_seqscan = true
enable_seqscan = false
#enable_indexscan = true
#enable_tidscan = true
#enable_sort = true
#enable_nestloop = true
#enable_mergejoin 

Re: [PERFORM] Query plan - now what?

2003-12-12 Thread Shridhar Daithankar
David Shadovitz wrote:

> Well, now that I have the plan for my slow-running query, what do I do?
> Where should I focus my attention?

Briefly looking over the plan and seeing the estimated vs. actual row
mismatch, I can suggest the following.

1. VACUUM FULL the database. Probably you have already done it.
2. Raise the statistics target to 500 or more and re-analyze the table(s)
in question.
3. Set enable_hashjoin to false before running the query and see if it
helps. (A sketch of all three is below.)
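
A minimal sketch of those three steps (the column name is made up, since
the real schema was never posted):

   VACUUM FULL ANALYZE;
   ALTER TABLE frogs ALTER COLUMN frog_id SET STATISTICS 500;
   ANALYZE frogs;
   SET enable_hashjoin = false;  -- then re-run EXPLAIN ANALYZE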
 HTH

 Shridhar



[PERFORM] Query plan - now what?

2003-12-12 Thread David Shadovitz
Well, now that I have the plan for my slow-running query, what do I do?  Where 
should I focus my attention?
Thanks.
-David


Hash Join  (cost=16620.59..22331.88 rows=40133 width=266)  (actual time=118773.28..580889.01 rows=57076 loops=1)
  ->  Hash Join  (cost=16619.49..21628.48 rows=40133 width=249)  (actual time=118771.29..535709.47 rows=57076 loops=1)
        ->  Hash Join  (cost=16618.41..20724.39 rows=40133 width=240)  (actual time=118768.04..432327.82 rows=57076 loops=1)
              ->  Hash Join  (cost=16617.34..19920.66 rows=40133 width=223)  (actual time=118764.67..340333.78 rows=57076 loops=1)
                    ->  Hash Join  (cost=16616.14..19217.14 rows=40133 width=214)  (actual time=118761.38..258978.81 rows=57076 loops=1)
                          ->  Merge Join  (cost=16615.07..18413.42 rows=40133 width=205)  (actual time=118758.74..187180.55 rows=57076 loops=1)
                                ->  Index Scan using grf_grf_id_idx on giraffes  (cost=0.00..1115.61 rows=53874 width=8)  (actual time=2.37..6802.38 rows=57077 loops=1)
                                ->  Sort  (cost=16615.07..16615.07 rows=18554 width=197)  (actual time=118755.11..120261.06 rows=59416 loops=1)
                                      ->  Hash Join  (cost=8126.08..14152.54 rows=18554 width=197)  (actual time=50615.72..109853.71 rows=16310 loops=1)
                                            ->  Hash Join  (cost=8124.39..12690.30 rows=24907 width=179)  (actual time=50607.36..86868.58 rows=15815 loops=1)
                                                  ->  Hash Join  (cost=249.26..2375.23 rows=24907 width=131)  (actual time=23476.42..35107.80 rows=16310 loops=1)
                                                        ->  Nested Loop  (cost=248.21..1938.31 rows=24907 width=118)  (actual time=23474.70..28155.13 rows=16310 loops=1)
                                                              ->  Seq Scan on zebras  (cost=0.00..1.01 rows=1 width=14)  (actual time=0.64..0.72 rows=1 loops=1)
                                                              ->  Materialize  (cost=1688.23..1688.23 rows=24907 width=104)  (actual time=23473.77..23834.26 rows=16310 loops=1)
                                                                    ->  Hash Join  (cost=248.21..1688.23 rows=24907 width=104)  (actual time=1199.26..23059.92 rows=16310 loops=1)
                                                                          ->  Seq Scan on frogs  (cost=0.00..755.07 rows=24907 width=83)  (actual time=0.53..4629.58 rows=25702 loops=1)
                                                                          ->  Hash  (cost=225.57..225.57 rows=9057 width=21)  (actual time=1198.01..1198.01 rows=0 loops=1)
                                                                                ->  Seq Scan on tigers  (cost=0.00..225.57 rows=9057 width=21)  (actual time=0.39..892.67 rows=9927 loops=1)
                                                        ->  Hash  (cost=1.04..1.04 rows=4 width=13)  (actual time=1.07..1.07 rows=0 loops=1)
                                                              ->  Seq Scan on deers  (cost=0.00..1.04 rows=4 width=13)  (actual time=0.64..0.95 rows=4 loops=1)

Re: [PERFORM] Measuring execution time for sql called from PL/pgSQL

2003-12-12 Thread Tomasz Myrta
On 2003-12-12 02:17, Aram Kananov wrote:
> select localtimestamp into v;
> raise notice ''Timestamp: %'', v;

Don't use localtimestamp, now(), or any other transaction-based time
function; they all return the same value throughout a transaction. The
only time function that can be used for performance tests is timeofday().

You can read more about the time functions in the manual.
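
For example, a minimal sketch of timing one statement from PL/pgSQL with
timeofday() (the function and table names here are made up):

   CREATE FUNCTION time_it() RETURNS interval AS '
   DECLARE
   t0 timestamp;
   t1 timestamp;
   BEGIN
   t0 := timeofday()::timestamp;     -- wall-clock time, advances mid-transaction
   PERFORM count(*) FROM my_table;   -- the statement under test
   t1 := timeofday()::timestamp;
   RETURN t1 - t0;                   -- elapsed time as an interval
   END;
   ' LANGUAGE 'plpgsql';

   SELECT time_it();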

Regards,
Tomasz Myrta

