[GENERAL] Question about data corruption issue of 9.1 (pre 9.1.6)

2012-11-05 Thread Denis Gasparin

Hi to all. 


We have a 9.1.5 production database that never crashed and today we upgraded it 
to 9.1.6. 


The question: do we need to follow the vacuum/reindex procedure (as described at 
http://wiki.postgresql.org/wiki/20120924updaterelease) even if the 
database never crashed? 
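
For reference, the procedure on that page boils down to something like the 
following, run once per database (hedged sketch only; the wiki page is 
authoritative, and "mydb" is a placeholder database name): 

SET vacuum_freeze_table_age = 0; 
VACUUM;                -- with the setting above, scans every table in full 
REINDEX DATABASE mydb; -- rebuilds indexes that could carry corrupt entries 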


Thank you in advance 


Denis Gasparin 

--- 
Edistar SRL 

[GENERAL] Odd behaviour in update rule

2010-07-07 Thread Denis Gasparin
Hi. 

I have an odd behaviour with an update rule in PostgreSQL 8.2, and I'd like 
to know whether it is expected or not. 

The following sql statements prepare the table, view and rule. 

create table main ( 
id integer not null primary key, 
value integer not null 
); 

create view view_main (id,value) as select * from main; 

create rule update_view_main as on update to view_main do instead update main 
set value=new.value where id = old.id; 

insert into main values(1,1); 

In table main we have only one record with id=1 and value=1. 

Now suppose that two clients connect simultaneously to the database and 
execute the following statements in parallel (CLIENT A first, then CLIENT B). 



CLIENT A: begin; 
CLIENT B: begin; 
CLIENT A: update view_main set value=value+1 where id=1; 
CLIENT B: update view_main set value=value+1 where id=1; --waits for client A 
to commit changes 
CLIENT A: commit; 
CLIENT B: commit; 

CLIENT A: select * from view_main; 
--expected value = 3 
--returned value = 2 


I would expect the two updates to behave exactly like direct updates on 
main (returned value = 3), but this is not the case... 
Is this the correct behaviour? 
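
For comparison, the same sequence issued directly against main rather than 
through the rule-based view yields 3: under READ COMMITTED, the blocked 
UPDATE re-evaluates the row after the first commit. 

CLIENT A: begin; 
CLIENT B: begin; 
CLIENT A: update main set value=value+1 where id=1; 
CLIENT B: update main set value=value+1 where id=1; --waits for client A 
CLIENT A: commit; 
CLIENT B: commit; 
CLIENT A: select value from main where id=1; --returns 3 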

Thank you, 
Denis 


[GENERAL] Free Cache Memory (Linux) and Postgresql

2008-09-30 Thread Denis Gasparin
Hi.

I'm considering issuing the drop_caches kernel command (echo 3 >
/proc/sys/vm/drop_caches) in order to free unused pagecache, directory
entries and inodes.

I'm thinking of scheduling the command during low-load periods, after
forcing a sync command.

I wonder whether this can cause any kind of problem for PostgreSQL. Any ideas?

Thank you in advance,
 
Denis Gasparin

Edistar SRL



Re: [GENERAL] Free Cache Memory (Linux) and Postgresql

2008-09-30 Thread Denis Gasparin
Tom Lane wrote:
> Denis Gasparin [EMAIL PROTECTED] writes:
>> I'm evaluating to issue the drop_caches kernel command (echo 3 >
>> /proc/sys/vm/drop_caches) in order to free unused pagecache, directory
>> entries and inodes.
>
> Why in the world would you think that's a good idea?
>
>   regards, tom lane

We see cached memory growing constantly, even when there are no
connections to the database.

We have some tables that are truncated and reloaded with updated data on
a regular basis (every 3-4 days).

It seems like Postgres or the operating system (Linux) is keeping that
old data in cache even though it has been deleted.

We're looking for a way to free that memory without shutting down PostgreSQL.

Thank you for your help,
Denis



[GENERAL] IN query operator and NULL values

2008-05-16 Thread Denis Gasparin

Hi all.

I have a problem with the IN operator in PostgreSQL 8.2.7. Here is an 
example that reproduces the problem: 


test=# create table test(a integer ,b integer);
CREATE TABLE
test=# insert into test values(1,1);
INSERT 6838415 1
test=# insert into test values(2,2);
INSERT 6838416 1
test=# insert into test values(3,null);
INSERT 6838417 1
test=# select * from test ;
a | b
---+---
1 | 1
2 | 2
3 |
(3 rows)

test=# select * from test where b in(1,null);
a | b
---+---
1 | 1

In the last result set I was expecting two records: the one with b = 1 
and the one with b = null.

PostgreSQL instead returns only the row with the non-null value.
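
If the intent is to match NULLs as well, an explicit IS NULL test expresses 
it under standard SQL semantics, since b IN (1, null) reduces to 
b = 1 OR b = null, and a comparison with null is never true: 

test=# select * from test where b = 1 or b is null;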

I also tested the example in PostgreSQL 8.1, and there it works as I 
expected (two records). 


So the question is: what has changed from 8.1 to 8.2?

Thank you in advance for your help,

Denis



Re: [GENERAL] Connection Pooling directly on Postgres Server

2007-09-08 Thread Denis Gasparin

> This has certainly been discussed before.
>
> IIRC the real argument against that was, that fork() isn't the most
> expensive thing to do anymore. And Postgres does lots of other stuff
> after accept(), namely connecting to a certain database,
> authenticating the user, etc..
OK, I knew that. I asked because it seems that Oracle, for example, is
moving to a similar solution in release 11g in order to solve
connection pooling problems.

For example look at the following link:

http://pbarut.blogspot.com/2007/08/oracle-11g-drcp-database-resident.html

Denis




[GENERAL] Connection Pooling directly on Postgres Server

2007-09-07 Thread Denis Gasparin
I'm looking for connection pooling solutions for our php/apache server.

I already checked pgpool and pgbouncer but during the tests, I had the
following (mad) idea...

Why not implement connection pooling server-side, as Apache for
example does?

Let me explain my idea...

The Postgres server would keep a number of connections alive (as
Apache, for example, does), even when a client disconnects.

The following parameters could be used to tune the number of connections
kept alive server side:

StartServers: number of already-active Postgres connections at server start
MinSpareServers: if there are fewer than MinSpareServers, a new
spare (connection) is created
MaxSpareServers: if there are more than MaxSpareServers, some of the
spares (connections) die off.

The parameter names are taken directly from a sample Apache httpd.conf...

Would it be possible to implement a similar solution in Postgres?

What do you think about this?

Thank you,
Denis



Re: [GENERAL] Question regarding autovacuum in 8.1

2007-08-30 Thread Denis Gasparin
Alvaro Herrera wrote:
>> However i have no idea of what tables the autovacuum daemon is
>> processing because there aren't autovacuum info columns on
>> pg_stat_all_tables (as there are for 8.2.x).
>
> For that, you need to change log_min_messages to debug2.
>
> Keep track of the PID of autovacuum from the "processing database"
> message and see if you can spot an ERROR message from it.

OK, thank you.
Another question/idea: why not emit the messages about which tables got
vacuumed by the autovacuum daemon as normal log messages (instead of
debug2)?
I think it could be useful, because that way you can also see which
tables are used more often than others...
If I'm not wrong, the old autovacuum process in 7.4 and 8.0 did that...

Denis



[GENERAL] Autovacuum not vacuuming pg_largeobject

2007-08-30 Thread Denis Gasparin
I'm a bit concerned about the autovacuum daemon.

Today I ran a VACUUM FULL during a normal maintenance task and noticed
that the size of pg_largeobject decreased from 14GB to 4GB...

Every night we have a procedure that deletes large objects that are no
longer referenced, using the vacuumlo program.

This program issues delete commands against the pg_largeobject table in
order to remove the rows of the large objects that are no longer referenced.
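
Roughly, vacuumlo boils down to something like this sketch, where 
mytable.lo_col stands for a hypothetical OID column referencing large 
objects (the real tool checks every such column in the database): 

select lo_unlink(lo.loid) 
from (select distinct loid from pg_largeobject) lo 
where not exists (select 1 from mytable t where t.lo_col = lo.loid); 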

Autovacuum is up and running... but now I'm thinking it doesn't examine
system tables such as pg_largeobject...

Am I wrong?

Thank you,
Denis



Re: [GENERAL] Autovacuum not vacuuming pg_largeobject

2007-08-30 Thread Denis Gasparin

> Yeah, you're wrong.  The difference is that plain vacuum does not try
> very hard to reduce the length of a table file --- it just frees up
> space within the file for reuse.  vacuum full will actually move things
> from the end of the file to free space nearer the head of the file,
> so that it can shorten the file.
>
> What I suspect the above observations really prove is you don't have
> max_fsm_pages set high enough, and so pg_largeobject was bloating because
> the free space was being forgotten instead of reused.

I tried to issue the VACUUM command on one of my databases and got the
following log messages:

LOG:  max_fsm_relations(1000) equals the number of relations checked
HINT:  You have at least 1000 relations.  Consider increasing the
configuration parameter "max_fsm_relations".

I suspect I must increase max_fsm_relations.
The value of max_fsm_pages is 20000 (the default value).
I suspect I must raise this one too... but how high should
this value be?
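
One way to estimate a sufficient value on 8.x (hedged; the exact message 
wording varies across releases) is a database-wide VACUUM VERBOSE, whose 
final lines summarize free-space-map usage: 

VACUUM VERBOSE; 
-- the trailing INFO/DETAIL lines report how many page slots are in use 
-- and how many are required to track all free space; max_fsm_pages can 
-- then be set above the "required" figure in postgresql.conf (restart 
-- needed). 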

Thank you in advance,
Denis








[GENERAL] Question regarding autovacuum in 8.1

2007-08-29 Thread Denis Gasparin
How is it possible to check whether autovacuum is running in 8.1.x?

SHOW autovacuum gives me "on", and I also see evidence in the logs,
where autovacuum writes LOG:  autovacuum: processing database ...

However I have no idea of which tables the autovacuum daemon is
processing, because there are no autovacuum info columns in
pg_stat_all_tables (as there are in 8.2.x).

I'm also asking because the size of pg_clog is 200M and I am
worried about possible transaction ID wraparound failures...
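
A quick wraparound sanity check that also works on 8.1 is the query from 
the Administrator's Guide; ages well below 2 billion are safe: 

SELECT datname, age(datfrozenxid) FROM pg_database; 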

Thank you,
Denis

Tom Lane wrote:
> Karl Denninger [EMAIL PROTECTED] writes:
>> But... shouldn't autovacuum prevent this?  Is there some way to look in
>> a log somewhere and see if and when the autovacuum is being run - and on
>> what?
>
> There's no log messages (at the default log verbosity anyway).  But you
> could look into the pg_stat views for the last vacuum time for each table.
>
>   regards, tom lane




[GENERAL] Postgresql over a SAN

2007-07-26 Thread Denis Gasparin
We're evaluating installing a SAN (Storage Area Network) and using it
as the storage area for our PostgreSQL server.

Has anybody already done this? Problems, performance issues, tips?

The db server is mainly used as a backend for several heavily loaded web servers.

The version of Postgresql is 8.1 and OS is linux with kernel 2.6.
The server will be connected to the SAN using a 4Gb fibre channel link.

Thank you in advance,
Denis



[GENERAL] Strange behaviour with Xfs filesystem and unix_socket_directory

2007-07-24 Thread Denis Gasparin
We configured unix_socket_directory in postgresql.conf to point to a
folder on an xfs filesystem.

The problem is that pg_ctl tries to start the daemon, but after 1 minute
it gives up with the following message: could not start postmaster.

The strange thing is that the postmaster is there, up and alive...

Moreover, this does not happen with other filesystems...

Any idea?

We're running 8.1.9 on Ubuntu Linux with kernel 2.6.17-11 and xfsprogs
2.8.10-1.

Thank you in advance for your help,

Denis




Re: [GENERAL] Strange behaviour with Xfs filesystem and unix_socket_directory

2007-07-24 Thread Denis Gasparin

Setting PGHOST solved the problem!

It was not clear to me that pg_ctl is actually a client and that it
doesn't read its configuration from postgresql.conf.

Thank you,
Denis

Tom Lane wrote:
> Denis Gasparin [EMAIL PROTECTED] writes:
>> We configured the unix_socket_directory in postgresql.conf to point to a
>> folder in a xfs filesystem.
>
> I don't think pg_ctl can deal with nondefault settings of
> unix_socket_directory, because it does not make any attempt to parse the
> postmaster's postgresql.conf file.  So it's still trying to contact the
> postmaster in /tmp, and of course not seeing any evidence the postmaster
> is up.
>
> You'll find that most other clients fail similarly.
>
> You might be able to make it work if you set PGHOST to the socket
> directory before running pg_ctl (and other clients).  But I think
> if you really want this, you would be best advised to alter the default
> socket directory at build time, instead (see DEFAULT_PGSOCKET_DIR in
> pg_config_manual.h).
>
>   regards, tom lane




Re: [GENERAL] AutoVacuum Behaviour Question

2007-06-29 Thread Denis Gasparin
Martijn van Oosterhout wrote:
> On Thu, Jun 28, 2007 at 11:12:19AM +0100, Bruce McAlister wrote:
>> I just want to verify that I understand you correctly here, do you mean
>> that the temporary table is created by specific sql, for example, create
>> temp table, then perform some actions on that temp table, then, either
>> you remove the temp table, or, if you close the session/connection the
>> postmaster will clean up the temp table? What happens if you're using
>> connection pools, i mean are those sessions deemed closed after the
>> queries complete, when the pool connections are persistent.
>
> Yes, the temp table is private to the session and will be removed once
> the session closes, if not sooner. As for connection pools, IIRC there
> is a RESET SESSION command which should also get rid of the temporary
> tables.
The RESET SESSION command is only available in the 8.2 branch, isn't it?
I tried to issue the command on an 8.1 server and the answer was:
ERROR:  unrecognized configuration parameter "session"

Is a patch available for the 8.1 version of PostgreSQL?

Thank you,
Denis


[GENERAL] Prepared queries vs Non-prepared

2007-03-28 Thread Denis Gasparin
Hi!
I am testing the PHP PDO library against the old-style PHP postgres
functions.

I noticed that the PDO library declares and prepares every statement. I mean:

$s = $db->query("select * from test where field=1");

is equivalent to

$s = $db->prepare("select * from test where field=?");
$s->execute(array('1'));

I logged the queries sent to the database and saw that they are the
same (apart from the parameter, obviously):

PREPARE pdo_pgsql_stmt_b7a71234 AS select * from test where field=1
BIND
EXECUTE unnamed  [PREPARE:  select * from test where field=1]

PREPARE pdo_pgsql_stmt_b7a713b0 AS select * from test where field=$1
BIND
EXECUTE unnamed  [PREPARE:  select * from test where field=$1]


Speaking of PostgreSQL performance:
wouldn't it be more efficient to execute the query directly in the first
case ($db->query) than to prepare a statement without parameters and then
execute it?
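
At the SQL level, the prepare/execute round-trip the driver performs 
amounts to something like this (sketch; "stmt" is an arbitrary statement 
name): 

PREPARE stmt(integer) AS select * from test where field = $1; 
EXECUTE stmt(1); 
DEALLOCATE stmt; 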

Thank you in advance,
Denis




[GENERAL] Query that does not use indexes

2007-03-27 Thread Denis Gasparin
I have a query that performs a multiple join between four tables and
that doesn't use the defined indexes.

If I set enable_seqscan to off, the query obviously uses the indexes,
and it is considerably faster than the normally planned execution with
enable_seqscan=true.

Can you give me a reason why PostgreSQL uses seq scans when it should not?

I also tried to vacuum analyze and reindex the whole database, but it
didn't change anything.
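
One knob that may be worth trying beyond a plain ANALYZE (hedged 
suggestion; note the row-count mismatches in the plans below, e.g. an 
estimate of 667 join rows against an actual 50125) is a higher statistics 
target on the join and filter columns: 

ALTER TABLE "order" ALTER COLUMN order_date SET STATISTICS 100; 
ANALYZE "order"; 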

Thank you in advance,
Denis

 Database and query infos 

The database is made of four tables. Here is an extract of the
definition:

table order (70 records)
order_id serial not null primary key,
order_date timestamp not null

table order_part (233 records)
part_id serial not null primary key,
order_id integer references order(order_id)

table component (351787 records) 
serial_number serial not null primary key,
part_id integer not null references order_part(part_id)

table component_part (50476 records) 
serial_number integer not null references component(serial_number),
component_part_serial serial unique

Index component_part_1 on serial_number of component_part
Index component_part_id on part_id of component


Here it is the query:

select to_char(ORDER.ORDER_DATE::date,'DD-MM-') as ORDER_DATE ,
count(component_part_serial) as COMPONENTS_PARTS_WITH_SERIAL,
count(*) as TOTAL_COMPONENTS_PARTS
from ORDER inner join ORDER_PART using(ORDER_ID)
inner join COMPONENT using(PART_ID)
inner join COMPONENT_PART using(SERIAL_NUMBER)
where ORDER.ORDER_DATE::date between '2007-03-01' and '2007-03-27' 
group by ORDER.ORDER_DATE::date order by ORDER.ORDER_DATE::date


Here is the explain analyze with seqscan on: 

 Sort  (cost=12697.04..12697.04 rows=1 width=24) (actual time=1929.983..1929.991 rows=7 loops=1)
   Sort Key: (order.order_date)::date
   ->  HashAggregate  (cost=12697.00..12697.03 rows=1 width=24) (actual time=1929.898..1929.949 rows=7 loops=1)
         ->  Hash Join  (cost=9462.76..12692.00 rows=667 width=24) (actual time=1355.807..1823.750 rows=50125 loops=1)
               Hash Cond: ("outer".serial_number = "inner".serial_number)
               ->  Seq Scan on component_part  (cost=0.00..2463.76 rows=50476 width=16) (actual time=0.011..93.194 rows=50476 loops=1)
               ->  Hash  (cost=9451.14..9451.14 rows=4649 width=24) (actual time=1333.016..1333.016 rows=50145 loops=1)
                     ->  Hash Join  (cost=34.84..9451.14 rows=4649 width=24) (actual time=1.350..1202.466 rows=50145 loops=1)
                           Hash Cond: ("outer".part_id = "inner".part_id)
                           ->  Seq Scan on component  (cost=0.00..7610.87 rows=351787 width=20) (actual time=0.004..603.470 rows=351787 loops=1)
                           ->  Hash  (cost=34.84..34.84 rows=3 width=12) (actual time=1.313..1.313 rows=44 loops=1)
                                 ->  Hash Join  (cost=7.40..34.84 rows=3 width=12) (actual time=0.943..1.221 rows=44 loops=1)
                                       Hash Cond: ("outer".order_id = "inner".order_id)
                                       ->  Seq Scan on order_part  (cost=0.00..26.27 rows=227 width=8) (actual time=0.005..0.465 rows=233 loops=1)
                                       ->  Hash  (cost=7.40..7.40 rows=1 width=12) (actual time=0.301..0.301 rows=28 loops=1)
                                             ->  Seq Scan on order  (cost=0.00..7.40 rows=1 width=12) (actual time=0.108..0.226 rows=28 loops=1)
                                                   Filter: (((order_date)::date >= '2007-03-01'::date) AND ((order_date)::date <= '2007-03-27'::date))
 Total runtime: 1930.309 ms

Here is the explain analyze with seqscan off: 

 Sort  (cost=19949.51..19949.51 rows=1 width=24) (actual time=1165.948..1165.955 rows=7 loops=1)
   Sort Key: (order.order_date)::date
   ->  HashAggregate  (cost=19949.47..19949.50 rows=1 width=24) (actual time=1165.865..1165.916 rows=7 loops=1)
         ->  Merge Join  (cost=15205.84..19944.47 rows=667 width=24) (actual time=541.778..1051.830 rows=50125 loops=1)
               Merge Cond: ("outer".serial_number = "inner".serial_number)
               ->  Sort  (cost=15205.84..15217.47 rows=4649 width=24) (actual time=540.331..630.632 rows=50145 loops=1)
                     Sort Key: component.serial_number
                     ->  Nested Loop  (cost=636.36..14922.66 rows=4649 width=24) (actual time=0.896..277.778 rows=50145 loops=1)
                           ->  Nested Loop  (cost=0.00..72.73 rows=3 width=12) (actual time=0.861..24.820 rows=44 loops=1)
                                 Join Filter: ("outer".order_id = "inner".order_id)
                                 ->  Index Scan using order_pkey on order  (cost=0.00..27.47 rows=1 width=12) (actual time=0.142..0.307 rows=28 loops=1)
                                       Filter: (((order_date)::date >= '2007-03-01'::date) AND ((order_date)::date <= '2007-03-27'::date))
                                 ->  Index Scan using

[GENERAL] Getting Text data as C string

2007-03-12 Thread Denis Gasparin
Hi.

I wrote a simple C stored procedure that accepts a single text parameter,
which I want to convert to a C string.
The problem is that I obtain the C string correctly, but with spurious
characters at the end... I use these calls to obtain the C string:

text *t_myfield = PG_GETARG_TEXT_P(0);
char *str_myfield = VARDATA(t_myfield);

Here is the source: 

#include "postgres.h"           /* general Postgres declarations */

#include "fmgr.h"               /* for argument/result macros */
#include "executor/executor.h"  /* for GetAttributeByName() */
#include <math.h>
#include "libpq/pqformat.h"     /* needed for send/recv functions */


Datum test_text(PG_FUNCTION_ARGS);
PG_FUNCTION_INFO_V1(test_text);

Datum test_text(PG_FUNCTION_ARGS) {
    text *t_myfield = PG_GETARG_TEXT_P(0);
    char *str_myfield = VARDATA(t_myfield);

    elog(NOTICE, "(%s)", str_myfield);

    PG_RETURN_TEXT_P(t_myfield);
}

This is the sql to create the function:

CREATE or replace FUNCTION test_text(text)
RETURNS text
AS '/your_path_to_sp.so/sp.so'
LANGUAGE C IMMUTABLE STRICT;

The output is (note the trailing spurious character in the NOTICE line):

NOTICE: (test12345678?)
test_text
--
test12345678

I expected:
NOTICE: (test12345678)
test_text
--
test12345678


Where is the error?

Thank you,
Denis




Re: [GENERAL] Getting Text data as C string

2007-03-12 Thread Denis Gasparin
Peter Eisentraut wrote:
> On Monday, 12 March 2007 12:47, Denis Gasparin wrote:
>> I wrote a simple c stored procedure that accepts only one text parameter
>> that i want to convert to a c string.
>> The problem is that i obtain the C string correctly but with spurious
>> characters at the end... I use these calls to obtain the c string:
>
> The data in a text datum is not null terminated.  You need to call the
> function textout() to convert.  Grep the source code for examples of
> invoking it.

I found these defines in the contrib section:

#define GET_STR(textp) DatumGetCString(DirectFunctionCall1(textout,
PointerGetDatum(textp)))
#define GET_TEXT(cstrp) DatumGetTextP(DirectFunctionCall1(textin,
CStringGetDatum(cstrp)))

So I can safely use them to obtain a string pointer from a text pointer
and vice versa.

I tried them and everything seems to work OK.
Thank you,
Denis


[GENERAL] age(datfrozenxid) negative for template0. Is this normal?

2006-05-02 Thread Denis Gasparin
Today I executed the following query, as described in the Administrator's 
Guide, to check for the XID wraparound problem: 


SELECT datname, age(datfrozenxid) FROM pg_database;

All the databases report an age of about 1 billion, except for the template0 
database. This is an extract of the result of the query: 


datname |age
-+
template1   | 1073794149
template0   | -686262347

Is it normal that the age for template0 is negative?
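
If memory serves, template0 is normally kept frozen and is not connectable, 
which may be why its age comes out negative here; its special status can 
be seen alongside the ages with (sketch, standard pg_database columns only): 

SELECT datname, datallowconn, age(datfrozenxid) FROM pg_database; 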

The version of the backend is 7.4.6 with pg_autovacuum running.

Please let me know as soon as possible whether this is a problem or not... 

Thank you,
Denis



Re: [GENERAL] Size comparison between a Composite type and an

2006-02-28 Thread Denis Gasparin

Hi Doug.

I considered the numeric type as well. In that case, if the number is 32 
digits long, the storage size is 2*8 + 8 = 24 bytes. 
If I store it using a composite data type of two bigints, the size is 2*8 
bytes plus the overhead of the composite structure. 
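
For concreteness, a sketch of the composite alternative being considered 
(hypothetical names, assuming the 8.0+ composite-type syntax; a 128-bit 
value split across two bigints): 

CREATE TYPE big128 AS (hi bigint, lo bigint); 
CREATE TABLE numbers (n big128); 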


If the composite data type has 4 bytes of overhead, I save 4 bytes for each 
number... which matters because I must store many, many numbers. 


Performance-wise, can the numeric type be indexed? 
For composite data types, I must create an operator class in order to 
index the fields of that type... 

What is the performance gap between indexed numeric and composite? 

Thank you,
Denis


Douglas McNaught wrote:
> [EMAIL PROTECTED] writes:
>> I need to store very large integers (more than 30 digits).
>
> Er,
>
> What's wrong with the NUMERIC type?  That can go up to hundreds of
> digits.
>
> -Doug






Re: [GENERAL] Smallint - Integer Casting Problems in Plpgsql functions

2004-03-17 Thread Denis Gasparin
Hi Richard.
Thank you for your reply. I rewrote the stored procedure to accept 
integer instead of smallint.

What I don't understand is why the cast works in 7.2.3. What 
has changed since then?
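
For the record, an alternative that keeps the smallint signature would be 
to cast the literal explicitly so it matches the declared parameter type 
(sketch, assuming the test function quoted below): 

select test('aaa', 1::smallint, 1); 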

Thank you,

--
Doct. Eng. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl


Richard Huxton wrote:
> On Wednesday 17 March 2004 15:54, Denis Gasparin wrote:
>> Hi, i'm upgrading our database from postgresql 7.2.3 to 7.4.2.
>> The import went fine but i have some casting problems with plpgsql
>> functions.
>> I've created a test function with this code:
>>
>> create function test(varchar,smallint,integer) returns integer as '
>>
>> select test('aaa',1,1);
>> gives me the following error:
>> ERROR:  function test(unknown, integer, integer) does not exist
>
> Easiest solution is to just define the function as accepting integer rather
> than smallint.
> I believe the typing will be smarter in 7.5 but don't know if it will affect
> this situation.





[GENERAL] Bug in pg_dumpall

2003-11-21 Thread Denis Gasparin
I created a user with no superuser privileges:

CREATE USER aaa PASSWORD 'bbb' NOCREATEDB NOCREATEUSER;

Then i created an authorization schema:

CREATE SCHEMA AUTHORIZATION aaa;

All worked fine.

The problem is in the pg_dumpall file.

Looking at the generated SQL, I find that the schema is created with 
these commands:

CREATE USER aaa PASSWORD 'bbb' NOCREATEDB NOCREATEUSER;

SET SESSION AUTHORIZATION aaa;
CREATE SCHEMA aaa;

This fails, giving me the following error:
ERROR:  : permission denied
Any ideas?

I'm using Postgresql 7.3.4.
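
A workaround sketch, in case it helps: hand-edit the dump so the schema is 
created by the superuser and handed over explicitly, instead of via 
SET SESSION AUTHORIZATION: 

RESET SESSION AUTHORIZATION; 
CREATE SCHEMA aaa AUTHORIZATION aaa; 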

--
Ing. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl
Via dell'artigianato, 1
31050 Vedelago TV
Telefono: 0423-733209
Fax: 0423-733733
Internet: www.edistar.com


Re: [GENERAL] Bug in pg_dumpall

2003-11-21 Thread Denis Gasparin
I'm restoring it as the superuser.

You can try executing the commands directly from psql.

Denis

Doug McNaught wrote:
> Denis Gasparin [EMAIL PROTECTED] writes:
>> Looking into the generated sql, i find that the schema is created with
>> the command:
>>
>> CREATE USER aaa PASSWORD 'bbb' NOCREATEDB NOCREATEUSER;
>>
>> SET SESSION AUTHORIZATION aaa;
>> CREATE SCHEMA aaa;
>> This fails giving me the following error:
>> ERROR:  : permission denied
>> Any ideas?
>
> Are you doing the restore as a regular user?  Output of pg_dumpall is
> designed to be restored by the superuser.
>
> -Doug






Re: [GENERAL] Python interface memory leak?

2001-10-17 Thread Denis Gasparin

At 16/10/01 14.00, you wrote:
> Does the python interface leak memory? We are seeing the process grow
> with basically every select. Any suggestions on what's going on? There
> are no cycles, and a trivial program (basically a loop around a select)
> demonstrates the problem.
>
> This is 7.1.2 on RH7.[01].
>
>      Stephen

The Python interface stores query data in memory. If you don't clear that 
data, the memory used will keep growing...

If that is not the case, try the PoPy postgres driver for Python 
(http://popy.sourgeforge.net or 
http://freshmeat.net/redir/popy/8258/url_tgz/PoPy-2.0.7.tar.gz ).

Regards,



Doct. Eng. Denis Gasparin                  [EMAIL PROTECTED]
---
Programmer & System Administrator          http://www.edistar.com
---
  Well alas we've seen it all before
   Knights in armour, days of yore
  The same old fears and the same old crimes
  We haven't changed since ancient times

   -- Iron Hand -- Dire Straits --
---





Re: [GENERAL] VACUUM, 24/7 availability and 7.2

2001-10-16 Thread Denis Gasparin

OK, a little language misunderstanding...

I'm sorry for the error...

Denis

At 15/10/01 17.00, Andrew Sullivan wrote:
> On Mon, Oct 15, 2001 at 10:40:17AM +0200, Denis Gasparin wrote:
>>> It's long since done.
>>
>> == This means that it will not be included in 7.2? I've read 7.2
>
> No, it means it _will_ be included.
>
> A
>
> --
> Andrew Sullivan                         87 Mowat Avenue
> Liberty RMS                             Toronto, Ontario Canada
> [EMAIL PROTECTED]                       M6K 3E3
>                                         +1 416 646 3304 x110





Re: [GENERAL] VACUUM, 24/7 availability and 7.2

2001-10-15 Thread Denis Gasparin


>> More importantly, what is the situation on VACUUM for release 7.2?
>> It seems from the pgsql-hackers list that there are plans for
>> a non-exclusively locking VACUUM, e.g.:
>>
>> http://groups.google.com/groups?q=vacuumhl=engroup=comp.databases.postgresql.hackersrnum=1selm=12833.990140724%40sss.pgh.pa.us
>>
>> (sorry about the long URL); how far advanced are they,
>
> It's long since done.

== This means that it will not be included in 7.2? I've read the 7.2 
documentation online and I've seen that the VACUUM command has changed: 
now, when run in normal mode (giving the backend the VACUUM command 
without any parameter), the tables don't need to be locked, and the 
command does not minimize the space used by the database (as the current 
7.1.3 VACUUM instead does). From the documentation:

-
Plain VACUUM (without FULL) simply reclaims space and makes it available 
for re-use. This form of the command can operate in parallel with normal 
reading and writing of the table. VACUUM FULL does more extensive 
processing, including moving of tuples across blocks to try to compact the 
table to the minimum number of disk blocks. This form is much slower and 
requires an exclusive lock on each table while it is being processed.
-

This behaviour is similar (but not identical) to the LAZY VACUUM described 
by Tom Lane at the above link...

In conclusion, will the new VACUUM command as described above be included 
in version 7.2 of PostgreSQL?

Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl




[GENERAL] General database programming question

2001-09-13 Thread Denis Gasparin

Hi to all!
I want to ask a general database programming question. Here it is...
In a programming language that handles exceptions, where should I put the 
db.commit/db.rollback statements? Here are a couple of solutions...

# db is a generic database connection object

# SOLUTION 1
db.begin()                  # begin transaction
try:
    db.query("UPDATE ...")  # execute a db query
    db.commit()             # commit changes to the database
except:
    # the query raised an exception
    db.rollback()

# SOLUTION 2
db.begin()                  # begin transaction
try:
    db.query("UPDATE ...")  # execute a db query
except:
    # the query raised an exception
    db.rollback()
else:
    # runs only if the try block raised no exception
    db.commit()             # commit changes to the database


Which is the best solution according to your experience? Are there other, 
different solutions?

Thank for the tips...
Regards,

Eng. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl






Re: [GENERAL]

2001-08-24 Thread Denis Gasparin

It contains 5 records. Do I have to run VACUUM ANALYZE on the table after 
issuing the CREATE INDEX?

Please, let me know...

Regards,
Denis

At 19.03 23/08/01, Doug McNaught wrote:
> Denis Gasparin [EMAIL PROTECTED] writes:
>> Hi to all!
>> I have created a table using the CREATE TABLE new_table
>> (col1,col2,col3) AS SELECT col1,col2,col3 FROM org_table.
>>
>> I create an index on this table using the statement:
>> CREATE UNIQUE INDEX table_idx ON new_table (col1).
>> Then i do a select as this:
>> SELECT * FROM new_table WHERE col1 = 'value'.
>>
>> The problem is that when i do an explain this is the query plan:
>>
>> Seq Scan on new_table  (cost=0.00..1116.38 rows=500 width=44)
>>
>> Can anyone explain me why it doesn't use the index I have created?
>
> How populated is the table?  If it's small, or if you haven't done
> VACUUM ANALYZE, the statistics may end up preferring a sequential
> scan.
>
> -Doug





Re: [GENERAL]

2001-08-24 Thread Denis Gasparin

I have done VACUUM ANALYZE too, but the statistics still prefer a 
sequential scan...

Now I'll try a different approach:
- I'll create the empty table with a CREATE TABLE (and a primary key on col1)
- then I'll populate it using an INSERT...SELECT statement
- last, I'll check what the planner says about a SELECT on the primary 
key.

When it's done, I'll let you know...

Denis

At 19.03 23/08/01, Doug McNaught wrote:
> Denis Gasparin [EMAIL PROTECTED] writes:
>> Hi to all!
>> I have created a table using the CREATE TABLE new_table
>> (col1,col2,col3) AS SELECT col1,col2,col3 FROM org_table.
>>
>> I create an index on this table using the statement:
>> CREATE UNIQUE INDEX table_idx ON new_table (col1).
>> Then i do a select as this:
>> SELECT * FROM new_table WHERE col1 = 'value'.
>>
>> The problem is that when i do an explain this is the query plan:
>>
>> Seq Scan on new_table  (cost=0.00..1116.38 rows=500 width=44)
>>
>> Can anyone explain me why it doesn't use the index I have created?
>
> How populated is the table?  If it's small, or if you haven't done
> VACUUM ANALYZE, the statistics may end up preferring a sequential
> scan.
>
> -Doug
> --
> Free Dmitry Sklyarov!
> http://www.freesklyarov.org/
>
> We will return to our regularly scheduled signature shortly.





Re: [GENERAL]

2001-08-24 Thread Denis Gasparin

Now I have tried creating the table first and then inserting...
The results are the same...
Is it possible that the query planner thinks a sequential scan is best 
even when an index on the table is present?
I'm using PostgreSQL 7.1.3 on Red Hat 7.1.

Thanks for the help,
Denis

P.S.: I'm sorry for having missed the subject of the mail

At 11.54 24/08/01, Denis Gasparin wrote:
> I have done VACUUM ANALYZE too but the statistics continue preferring
> sequential scan...
>
> Now i'll try to use a different approach:
> - i'll create the empty table with a CREATE TABLE (and a primary key on col1)
> - then i'll populate it using then INSERT..SELECT statement
> - Last i'll check what the statistics say about the SELECT on the primary
> key query.
>
> When i've done, i'll tell you...
>
> Denis
>
> At 19.03 23/08/01, Doug McNaught wrote:
>> How populated is the table?  If it's small, or if you haven't done
>> VACUUM ANALYZE, the statistics may end up preferring a sequential
>> scan.
>>
>> -Doug





[GENERAL] BIGINT datatype and Indexes Failure

2001-08-24 Thread Denis Gasparin

Hi to all!
Is it possible to define indexes on a column with BIGINT datatype? See 
this example:

testdb=# create table a (col1 integer not null primary key);
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for 
table 'a'
CREATE
testdb=# create table b (col1 bigint not null primary key);
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'b_pkey' for 
table 'b'
CREATE
testdb=# explain select * from a where col1=123;
NOTICE:  QUERY PLAN:

Index Scan using a_pkey on a  (cost=0.00..8.14 rows=10 width=4)

EXPLAIN
testdb=# explain select * from b where col1=123;
NOTICE:  QUERY PLAN:

Seq Scan on b  (cost=0.00..22.50 rows=10 width=8)

On table a (INTEGER datatype) the search is done using the index.
On table b (BIGINT datatype) the search is instead always done with a 
seq scan.

Is it a bug?

I use PostgreSQL 7.1.2 on Red Hat 7.1. All the regression tests completed 
correctly during installation.

Please let me know as soon as possible...

Regards,

Eng. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl





Re: [GENERAL] BIGINT datatype and Indexes Failure

2001-08-24 Thread Denis Gasparin

I searched the archive and found that I have to specify the type of 
the column at the end of the query... so the new query is:

explain select * from b where col1=123::int8;


This way, the explain output is correct.
The e-mail I found in the archive says that the problem will be solved in 
some future release.

At this point, is it safe to use the BIGINT datatype and indexes on those fields?
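
An equivalent workaround, for what it's worth, is to quote the literal so 
that its type is resolved from the indexed column instead of defaulting 
to int4: 

explain select * from b where col1='123'; 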

Thank for your answers,

Regards,

Eng. Denis Gasparin: [EMAIL PROTECTED]
---
Programmer & System Administrator - Edistar srl



At 14.57 24/08/01, Denis Gasparin wrote:
> Hi to all!
>  Is it possible to define indexes on a column with BIGINT
> datatype? See this example:
>
> testdb=# create table a (col1 integer not null primary key);
> NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'a_pkey' for
> table 'a'
> CREATE
> testdb=# create table b (col1 bigint not null primary key);
> NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index 'b_pkey' for
> table 'b'
> CREATE
> testdb=# explain select * from a where col1=123;
> NOTICE:  QUERY PLAN:
>
> Index Scan using a_pkey on a  (cost=0.00..8.14 rows=10 width=4)
>
> EXPLAIN
> testdb=# explain select * from b where col1=123;
> NOTICE:  QUERY PLAN:
>
> Seq Scan on b  (cost=0.00..22.50 rows=10 width=8)
>
> On table a (INTEGER datatype) the search is done using the index.
> Instead on table b (BIGINT datatype) the search is always done using the
> seq scan.
>
> Is it a bug?





[GENERAL]

2001-08-23 Thread Denis Gasparin

Hi to all!
I have created a table using CREATE TABLE new_table (col1,col2,col3) 
AS SELECT col1,col2,col3 FROM org_table.
I created an index on this table using the statement:
CREATE UNIQUE INDEX table_idx ON new_table (col1).
Then I ran a select like this:
SELECT * FROM new_table WHERE col1 = 'value'.

The problem is that when i do an explain this is the query plan:

Seq Scan on new_table  (cost=0.00..1116.38 rows=500 width=44)


Can anyone explain why it doesn't use the index I have created?

Thank you for you help...

Bye, Denis





[GENERAL] Pgsql vs Interbase: Transaction benchmark

2001-06-01 Thread Denis Gasparin

Hi to all!
 I'm doing some benchmarks comparing Interbase and PostgreSQL. The 
tests are done with the respective PHP clients. The two servers are 
installed on the same machine, running Red Hat 7.0.
Pgsql has been started with these configuration options:
 - sort_mem = 512
 - fsync = false
 - shared_buffers = 1024
 - commit_delay = 0
 - commit_siblings = 1
Interbase is installed with the default options.

The test consists of these operations:
- START TRANSACTION
  - for i = 1 to 100
   - SELECT
   - UPDATE (on the same row of the select)
  - end for
- END TRANSACTION

Each transaction is then repeated 100 times by 10 different clients. The 
isolation level is READ COMMITTED on both servers.
All operations are performed on the same table of 1.000.000 records, and 
the searches (those specified by the WHERE clauses of the SELECT and UPDATE) 
are done only on the primary key of the table itself.
I calculated the mean duration of a transaction and the machine load 
(obtained using the w command). The results are:

INTERBASE (6.0.1)
Average Machine Load: 5.00
Duration of a transaction: 23 s

POSTGRESQL (7.1.1)
Average Machine Load: 10.00
Duration of a transaction: 40 s

I've also done a test without concurrent client selects/updates (with only 
one client...) and the results are:
INTERBASE
Average Machine Load: 0.40
Duration of a transaction: 7.5 s

POSTGRESQL
Average Machine Load: 1.10
Duration of a transaction: 6.4 s

Is there anything I can do to speed up PostgreSQL? Is there anything wrong 
with my configuration parameters?

The machine has 128Mb of memory and the processor is a PII 350Mhz.


Thank you in advance for your replies and comments,

Denis
---
[EMAIL PROTECTED]






Re: [GENERAL] Pgsql vs Interbase: Transaction benchmark

2001-06-01 Thread Denis Gasparin

The table definition is:
CREATE TABLE ADDRESS (
 GROUPID INTEGER NOT NULL,
 ADDRESSID INTEGER NOT NULL,
 NAME VARCHAR(256) NOT NULL,
 SURNAME VARCHAR(256) NOT NULL,
 ADDRESS VARCHAR(256),
 PHONE VARCHAR(256),
 EMAIL VARCHAR(256),
 FAX VARCHAR(256),
 PRIMARY KEY(GROUPID,ADDRESSID)
);

The explain command gives me these results:
explain SELECT * FROM ADDRESS1 WHERE GROUPID = 5 AND ADDRESSID = 1000;
NOTICE:  QUERY PLAN:

Index Scan using address1_pkey on address1  (cost=0.00..2.02 rows=1 width=92)

(PS: There are 100 groups of 10.000 records each = 1.000.000 records)


Denis

At 16.14 01/06/01, Tom Lane wrote:
> Denis Gasparin [EMAIL PROTECTED] writes:
>> All the operations are performed in the same table with 1.000.000 of
>> records and the searches (those specified by the WHERE clause of SELECT
>> and UPDATE) are done only on the primary key of the table itself.
>
> Have you checked (with EXPLAIN) that you're actually getting indexscan
> plans?
>
>   regards, tom lane





Re: [GENERAL] Kylix, dbExpress & PostgreSQL

2001-05-29 Thread Denis Gasparin

Zeos (www.zeoslib.org) has released the latest version of their library, 
and now it supports Kylix! Obviously it isn't dbExpress, but it uses the 
pgsql.so library directly. Give it a try! It works...

Denis Gasparin

At 23.16 28/05/01, [EMAIL PROTECTED] wrote:
> Hello,
>
> There is one under development. Alternatively you could try and modify the
> GNU ObjectPascal driver for use in Kylix.
>
> J
>
> On Mon, 28 May 2001, Denis Gasparin wrote:
>> Hi to all!
>> I searched all the pgsql mailing lists (and the internet) to find if there
>> is a dbexpress driver or some way to get pgsql into Borland Kylix. The only
>> solution i've found is the beta driver from EasySoft dbExpress Gateway to
>> ODBC.
>> Is there another way, or is there any plan to support Kylix & dbExpress?
>>
>> Please, let me know: i'm going to develop a program in kylix and i'm
>> considering what db platform to use. If there isn't another way i've to
>> choose Interbase instead of pgsql (with very very much regret...).
>>
>> Thanks to all,
>> Denis Gasparin
>> ---
>> [EMAIL PROTECTED]

