[PERFORM] Problems with + 1 million record table

2007-10-05 Thread Cláudia Macedo Amorim
I'm new to PostgreSQL and I need some help.
I have a table with ~2 million records. Queries on this table are too slow and
some never complete. I think this must be a simple problem to solve, but I'm
trying without success. I'm worried because next week I will need to work with
tables of ~100 million records.

I'm using:
O.S.: Windows XP;
PostgreSQL 8.2;
Index type: btree.

I have 2 GB of RAM.
POSTGRESQL XXX.LOG:

2007-10-05 09:01:42%SELECT LOG:  could not send data to client: Unknown 
winsock error 10061
2007-10-05 09:03:03%idle LOG:  could not receive data from client: Unknown 
winsock error 10061
2007-10-05 09:03:03%idle LOG:  unexpected EOF on client connection


PSQLODBC.LOG:

[13236.470] 

[13236.470] hdbc=02DE3008, stmt=02C7B1A8, result=02C791D0
[13236.470] prepare=0, internal=0
[13236.470] bindings=32090580, bindings_allocated=20
[13236.470] parameters=, parameters_allocated=0
[13236.470] statement_type=0, statement='select 
a_teste_nestle.CODCLI,
a_teste_nestle.CODFAB,
a_teste_nestle.CODFAMILIANESTLE,
a_teste_nestle.CODFILIAL,
a_teste_nestle.CODGRUPONESTLE,
a_teste_nestle.CODSUBGRUPONESTLE,
a_teste_nestle.CONDVENDA,
a_teste_nestle.DATA,
a_teste_nestle.DESCRICAO,
a_teste_nestle.PESO,
a_teste_nestle.PRACA,
a_teste_nestle.PUNIT,
a_teste_nestle.PVENDA,
a_teste_nestle.QT,
a_teste_nestle.QTITVENDIDOS,
a_teste_nestle.QTPESOPREV,
a_teste_nestle.QTVENDAPREV,
a_teste_nestle.SUPERVISOR,
a_teste_nestle.VENDEDOR,
a_teste_nestle.VLVENDAPREV
from a_teste_nestle 
'
[13236.486] stmt_with_params='select 
a_teste_nestle.CODCLI,
a_teste_nestle.CODFAB,
a_teste_nestle.CODFAMILIANESTLE,
a_teste_nestle.CODFILIAL,
a_teste_nestle.CODGRUPONESTLE,
a_teste_nestle.CODSUBGRUPONESTLE,
a_teste_nestle.CONDVENDA,
a_teste_nestle.DATA,
a_teste_nestle.DESCRICAO,
a_teste_nestle.PESO,
a_teste_nestle.PRACA,
a_teste_nestle.PUNIT,
a_teste_nestle.PVENDA,
a_teste_nestle.QT,
a_teste_nestle.QTITVENDIDOS,
a_teste_nestle.QTPESOPREV,
a_teste_nestle.QTVENDAPREV,
a_teste_nestle.SUPERVISOR,
a_teste_nestle.VENDEDOR,
a_teste_nestle.VLVENDAPREV
from a_teste_nestle 

'
[13236.486] data_at_exec=-1, current_exec_param=-1, put_data=0
[13236.501] currTuple=-1, current_col=-1, lobj_fd=-1
[13236.501] maxRows=0, rowset_size=1, keyset_size=0, 
cursor_type=0, scroll_concurrency=1
[13236.501] cursor_name='SQL_CUR02C7B1A8'
[13236.501] QResult Info 
---
[13236.501] fields=02C7C9B8, backend_tuples=, 
tupleField=0, conn=02DE3008
[13236.501] fetch_count=0, num_total_rows=819200, 
num_fields=20, cursor='(NULL)'
[13236.501] message='Out of memory while reading tuples.', 
command='(NULL)', notice='(NULL)'
[13236.501] status=7, inTuples=1
[13236.501]CONN ERROR: func=SC_execute, desc='(null)', errnum=109, errmsg='Out 
of memory while reading tuples.'
[13236.517]

[13236.517]henv=02C727B8, conn=02DE3008, status=1, num_stmts=16
[13236.517]sock=02DD3120, stmts=02DD8EE8, lobj_type=17288
[13236.517] Socket Info 
---
[13236.517]socket=512, reverse=0, errornumber=0, errormsg='(NULL)'
[13236.517]buffer_in=46642688, buffer_out=46633712
[13236.517]buffer_filled_in=4096, buffer_filled_out=0, 
buffer_read_in=3426
[63860.095]conn=02DE3008, PGAPI_Disconnect
[63880.251]conn=02C73A78, PGAPI_Disconnect

POSTGRESQL.CONF:



#---
# RESOURCE USAGE (except WAL)
#---

# - Memory -

shared_buffers = 512MB   # min 128kB or max_connections*16kB
 # (change requires restart)
temp_buffers = 32MB   # min 800kB
#max_prepared_transactions = 5  # can be 0 or more
 # (change requires restart)
# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 256MB   # min 64kB
maintenance_work_mem = 128MB  # min 1MB
#max_stack_depth = 2MB   # min 100kB

# - Free Space Map -

max_fsm_pages = 409600  # min max_fsm_relations*16, 6 bytes each
 # (change requires restart)
#max_fsm_relations = 1000  # min 100, ~70 bytes each
 # (change requires restart)




The table structure is:

CREATE TABLE public.a_teste_nestle (
  DATA TIMESTAMP WITH TIME ZONE, 
  CODCLI DOUBLE PRECISION, 
  VENDEDOR DOUBLE PRECISION, 
  SUPERVISOR DOUBLE PRECISION, 
  CODFILIAL VARCHAR(2), 
  PRACA DOUBLE PRECISION, 
  CONDVENDA DOUBLE PRECISION, 
  QTITVENDIDOS DOUBLE PRECISION, 
  PVENDA DOUBLE 

Re: [PERFORM] Problems with + 1 million record table

2007-10-05 Thread Scott Marlowe
On 10/5/07, Cláudia Macedo Amorim [EMAIL PROTECTED] wrote:

 I'm new to PostgreSQL and I need some help.

 I have a table with ~2 million records. Queries in this table are too slow
 and some are not completed.
 I think it must be a simple question to solve but, I'm trying without
 success. I'm worried because next week I will need to work with tables
 with ~100 million records.

 I'm using:
 O.S.: Windows XP;
 PostgreSQL 8.2;
 Index type: btree.

 I have 2 GB of RAM.

 POSTGRESQL XXX.LOG:

 2007-10-05 09:01:42%SELECT LOG:  could not send data to client: Unknown
 winsock error 10061
 2007-10-05 09:03:03%idle LOG:  could not receive data from client: Unknown
 winsock error 10061
 2007-10-05 09:03:03%idle LOG:  unexpected EOF on client connection

This looks like your client is dying on receiving too much data.  You
can either try to fix the client to handle more data, which isn't the
best way to proceed, or you can retrieve your data with a cursor a
chunk at a time.

 PSQLODBC.LOG:
 [13236.501]CONN ERROR: func=SC_execute, desc='(null)', errnum=109,
 errmsg='Out of memory while reading tuples.'

Assuming this is the client side error, yes, you're simply reading too
many rows at once.

 POSTGRESQL.CONF:
 shared_buffers = 512MB   # min 128kB or max_connections*16kB

Reasonable for a machine with 2 GB of RAM.

 work_mem = 256MB# min 64kB

This is OK only if you have one or two users; otherwise it's too high,
because work_mem can be allocated per sort or hash operation, not once
per server.
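One possible compromise (a sketch, not tested against this setup): keep the global value modest in postgresql.conf and raise work_mem only in the session that runs the big reporting queries:

```sql
-- postgresql.conf keeps a modest default, e.g.: work_mem = 8MB
-- Then, only in the session that needs large sorts:
SET work_mem = '256MB';
```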

Take a look at cursors, here's the declare ref page:

http://www.postgresql.org/docs/8.2/static/sql-declare.html
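For example, a minimal sketch of the cursor approach against the table from the original post (the cursor name and chunk size are arbitrary choices):

```sql
-- Cursors only live inside a transaction block.
BEGIN;

DECLARE nestle_cur CURSOR FOR
    SELECT * FROM a_teste_nestle;

-- Fetch 10000 rows at a time; repeat the FETCH until it returns no rows.
FETCH FORWARD 10000 FROM nestle_cur;

CLOSE nestle_cur;
COMMIT;
```

Since the client here is psqlODBC, the driver's UseDeclareFetch=1 connection option (together with its Fetch row-count setting) should give the same chunked retrieval without changing the application, though I haven't verified that against this particular setup.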

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Problems with + 1 million record table

2007-10-05 Thread Arjen van der Meijden

On 5-10-2007 16:34 Cláudia Macedo Amorim wrote:

[13236.470] statement_type=0, statement='select
a_teste_nestle.CODCLI,
a_teste_nestle.CODFAB,
a_teste_nestle.CODFAMILIANESTLE,
a_teste_nestle.CODFILIAL,
a_teste_nestle.CODGRUPONESTLE,
a_teste_nestle.CODSUBGRUPONESTLE,
a_teste_nestle.CONDVENDA,
a_teste_nestle.DATA,
a_teste_nestle.DESCRICAO,
a_teste_nestle.PESO,
a_teste_nestle.PRACA,
a_teste_nestle.PUNIT,
a_teste_nestle.PVENDA,
a_teste_nestle.QT,
a_teste_nestle.QTITVENDIDOS,
a_teste_nestle.QTPESOPREV,
a_teste_nestle.QTVENDAPREV,
a_teste_nestle.SUPERVISOR,
a_teste_nestle.VENDEDOR,
a_teste_nestle.VLVENDAPREV
from a_teste_nestle
 
'


Is that the entire query? Are you sure you really want to select the 
entire table without a WHERE clause? That's normally not a very 
scalable approach...
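To make that concrete, a sketch of a bounded query (the column list and date range are invented for illustration; DATA is the indexed column):

```sql
-- Restrict the scan with a WHERE clause on an indexed column
-- instead of pulling the whole table to the client.
SELECT CODCLI, PVENDA, QT
FROM a_teste_nestle
WHERE DATA >= '2007-09-01'
  AND DATA <  '2007-10-01';
```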


Best regards,

Arjen

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Problems with + 1 million record table

2007-10-05 Thread Joshua D. Drake
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Cláudia Macedo Amorim wrote:
 I'm new to PostgreSQL and I need some help.
 I have a table with ~2 million records. Queries on this table are too slow
 and some never complete. I think this must be a simple problem to solve,
 but I'm trying without success. I'm worried because next week I will need
 to work with tables of ~100 million records.
 I'm using: O.S.: Windows XP; PostgreSQL 8.2; Index type: btree.
 I have 2 GB of RAM.
 POSTGRESQL XXX.LOG:
 
 2007-10-05 09:01:42%SELECT LOG:  could not send data to client: Unknown 
 winsock error 10061
 2007-10-05 09:03:03%idle LOG:  could not receive data from client: Unknown 
 winsock error 10061
 2007-10-05 09:03:03%idle LOG:  unexpected EOF on client connection


You are not providing a WHERE clause, which means you are scanning all 2
million records. If you really need to do that, do it with a cursor.


Joshua D. Drake



 
 

Re: [PERFORM] Problems with + 1 million record table

2007-10-05 Thread Shane Ambler

Joshua D. Drake wrote:


You are not providing a where clause which means you are scanning all 2
million records. If you need to do that, do it in a cursor.


Joshua D. Drake




I would also add that if you want to filter on anything other than the DATA 
column in the WHERE clause, you should add indexes on those columns as well.
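A sketch of what that could look like (CODCLI and CODPROD are illustrative choices; index the columns your queries actually filter on):

```sql
-- Hypothetical examples, mirroring the existing index on DATA:
CREATE INDEX a_teste_nestle_codcli_idx
  ON public.a_teste_nestle USING btree (CODCLI);

CREATE INDEX a_teste_nestle_codprod_idx
  ON public.a_teste_nestle USING btree (CODPROD);
```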




The table structure is:

CREATE TABLE public.a_teste_nestle (
  DATA TIMESTAMP WITH TIME ZONE, 
  CODCLI DOUBLE PRECISION, 
  VENDEDOR DOUBLE PRECISION, 
  SUPERVISOR DOUBLE PRECISION, 
  CODFILIAL VARCHAR(2), 
  PRACA DOUBLE PRECISION, 
  CONDVENDA DOUBLE PRECISION, 
  QTITVENDIDOS DOUBLE PRECISION, 
  PVENDA DOUBLE PRECISION, 
  PESO DOUBLE PRECISION, 
  CODPROD VARCHAR(15), 
  CODFAB VARCHAR(15), 
  DESCRICAO VARCHAR(80), 
  CODGRUPONESTLE DOUBLE PRECISION, 
  CODSUBGRUPONESTLE DOUBLE PRECISION, 
  CODFAMILIANESTLE DOUBLE PRECISION, 
  QTPESOPREV DOUBLE PRECISION, 
  QTVENDAPREV DOUBLE PRECISION, 
  VLVENDAPREV DOUBLE PRECISION, 
  QT DOUBLE PRECISION, 
  PUNIT DOUBLE PRECISION

) WITHOUT OIDS;

CREATE INDEX a_teste_nestle_idx ON public.a_teste_nestle
  USING btree (DATA);


Thanks,



--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz
