[GENERAL] xpath

2010-02-10 Thread Allan Kamau
Hi,
I am running PostgreSQL 8.4.2. I have a table that stores a single XML
document per row in one of its fields. I would like to use xpath to
retrieve portions of these XML documents.
Is there a way to do so? (I am running PostgreSQL 8.4.2 configured
(built) with the --with-libxml and --with-libxslt options.)

I have looked at 'xpath' but I am unable to get it to work on table fields.

The command below works.
SELECT xpath('/doc/name/@first','<doc><name first="David"
last="Marston"/>...</doc>');

The command below seems not to execute successfully
SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM
staging.simple_table a WHERE a.id=1;

HINT:  No function matches the given name and argument types. You
might need to add explicit type casts.


Allan.
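For reference, the failing call is likely a type issue: xpath() takes an xml second argument, and if xml_payload is stored as text an explicit cast is needed. A sketch (table and column names taken from the post; this assumes the column holds well-formed XML):

```sql
-- Cast the TEXT column to xml so the xpath(text, xml) overload resolves.
SELECT a.id,
       xpath('/doc/name/@first', a.xml_payload::xml)
FROM staging.simple_table a
WHERE a.id = 1;
```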

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] R: One column to multiple columns based on constraints?

2010-02-10 Thread Davor J.
I have limited access to the database. I cannot write stored procedures, so it
has to be pure SQL.

But yes, PL opens other possibilities.

Davor
  BillR iamb...@williamrosmus.com wrote in message 
news:004a01caaa01$7b92ade0$72b809...@com...
  Is there any reason it has to be done in one DML statement? Can you write a 
procedure to do this in multiple steps?

   

  BillR

   

  From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Davor J.
  Sent: February-09-10 2:02 PM
  To: pgsql-general@postgresql.org
  Subject: Re: [GENERAL] R: One column to multiple columns based on constraints?

   

  Crosstab is indeed very interesting. Thank you for the suggestion Vincenzo.

   

  regards

  Davor

Vincenzo Romano vincenzo.rom...@notorand.it wrote in message 
news:3eff28921002081133h4b0d7fabm96cc1bc08e579...@mail.gmail.com...

Look for crosstab in the documentation.

  On 8 Feb 2010, 8:21 p.m., Davor J. dav...@live.com wrote:

  Let's say you have a table:
  CREATE TABLE t (
  time date,
  data integer
  )

  Suppose you want a new table that has columns similar to the following:
  (x.time, x.data, y.time, y.data, z.time, z.data) where x.time, y.time and
  z.time columns are constrained (for example x.time > 2007 AND x.time < 2008,
  y.time > 2008 AND y.time < 2009, z.time > 2010)

  How would you do this? Note that you cannot use JOIN as there is no
  relationship.

  Currently I came up with something like this:

  SELECT X.*, (SELECT Y.time, Y.data FROM t AS Y WHERE Y.time = X.time + 1),
  (SELECT Z.time ...) FROM t AS X WHERE X.time > 2007 AND X.time < 2008

  But it's somewhat awkward. I thought maybe someone has better ideas. Any
  input is welcome.
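For reference, the crosstab suggestion mentioned above would look roughly like this. This is only a sketch of the call shape: it assumes the contrib/tablefunc module is installed, the year/value mapping is invented for illustration, and with multiple rows per year you would need an additional row key.

```sql
-- Requires contrib/tablefunc. crosstab() pivots a query returning
-- (row_key, category, value) into one column per category.
SELECT *
FROM crosstab(
  $$SELECT 1 AS row_key, extract(year FROM time)::int AS yr, data
    FROM t ORDER BY 1, 2$$
) AS ct(row_key int, data2007 int, data2008 int, data2009 int);
```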




  




Re: [GENERAL] xpath

2010-02-10 Thread Otandeka Simon Peter
Allan,

Postgres is very strict on variable types and char conversion.  I have a
feeling you are trying to access data from a varchar field using an
integer...

Can you paste here your schema for that table?

P.

On Wed, Feb 10, 2010 at 11:06 AM, Allan Kamau kamaual...@gmail.com wrote:

 Hi,
 I am running PostgreSQL 8.4.2. I have a table that stores a single XML
 document per row in one of its fields. I would like to use xpath to
 retrieve portions of these XML documents.
 Is there a way to do so? (I am running PostgreSQL 8.4.2 configured
 (built) with the --with-libxml and --with-libxslt options.)

 I have looked at 'xpath' but I am unable to get it to work on table fields.

 The command below works.
 SELECT xpath('/doc/name/@first','<doc><name first="David"
 last="Marston"/>...</doc>');

 The command below seems not to execute successfully
 SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM
 staging.simple_table a WHERE a.id=1;

 HINT:  No function matches the given name and argument types. You
 might need to add explicit type casts.


 Allan.




[GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Carsten Kropf
Hello everybody,
I am quite a novice at using the extension features of the PostgreSQL database. 
Actually, I have to do this for work at the university. At the moment, I am 
experimenting a little with creating my own types using shared objects 
written in C. Using static types of fixed length was actually no 
problem for me, so I proceeded to variable-length types.
I created an n-dimensional point structure called PointND that contains a 
field of float8 values of dynamic length. I also put in an int4/int32 field for 
the length specification, as required by the documentation. So the structure 
looks like the following:
struct PointND
{
    int32 dimensions;
    float8 coordinates[1];
};
I hope that at least this layout is as required. Together with this 
type, I also provide the required in and out functions to convert the structure 
to the internal/external representation properly.
The in/out functions work properly when giving the following statement (I also 
tried it using a debugger):
select '(4,5,6)'::pointnd;
       pointnd        
----------------------
 (4.00, 5.00, 6.00)
(1 row)

So it seems that at least these functions do what they are supposed to.
The problem I have is that if I now create a table that should store entries of 
the type pointnd, it won't actually store them. If I do an insert like the 
following:
insert into test (point) values ('(5,16,6)'::pointnd);
INSERT 0 1

I get the feedback that one new row has been created. This row has indeed 
been created and the in function is also called (I checked this using the 
debugger as well). Now I would have expected something like the following when 
querying the table:
select * from test;
        point         
----------------------
 (5.00, 16.00, 6.00)

But, actually I get the following:
select * from test;
        point         
----------------------
 (0.00, 0.00, 0.00)

The SQL-Script used to create the type can be seen here:
CREATE TYPE pointnd (
    INTERNALLENGTH = VARIABLE,
    ALIGNMENT = DOUBLE,
    INPUT = pointnd_in,
    OUTPUT = pointnd_out,
    RECEIVE = pointnd_recv,
    SEND = pointnd_send,
    STORAGE = PLAIN
);

I played around with the parameters a little bit, but still don't know where 
this behaviour comes from. Actually, I was thinking that I conform to the 
requirements given by Postgres after having read the documentation. Setting the 
storage type to another method (like MAIN) results in a segmentation fault, 
though.
I would be very glad if somebody could provide me some help with this issue, 
because then I could proceed with my actual work.
Thank you in advance

Best regards

Carsten Kropf



Re: [GENERAL] One column to multiple columns based on constraints?

2010-02-10 Thread John R Pierce

Davor J. wrote:

Let's say you have a table:
CREATE TABLE t (
time date,
data integer
)

Suppose you want a new table that has columns similar to the following:
(x.time, x.data, y.time, y.data, z.time, z.data) where x.time, y.time and 
z.time columns are constrained (for example x.time > 2007 AND x.time < 2008, 
y.time > 2008 AND y.time < 2009, z.time > 2010)


How would you do this? Note that you cannot use JOIN as there is no 
relationship.


Currently I came up with something like this:

SELECT X.*, (SELECT Y.time, Y.data FROM t AS Y WHERE Y.time = X.time + 1), 
(SELECT Z.time ...) FROM t AS X WHERE X.time > 2007 AND X.time < 2008



Um, why can't you use a join?

SELECT X.*, Y.time, Y.data FROM t AS X JOIN t as Y ON (Y.time = X.time + '1 year'::INTERVAL) 
	WHERE X.time >= '2007-01-01'::DATE AND X.time < '2008-01-01'::DATE;




I believe this should be functionally equivalent to your nested select.   I'm 
not really sure what you're trying to imply with your date <> integer 
comparisons, so I tried to be a little more rigorous there.






Re: [GENERAL] xpath

2010-02-10 Thread Allan Kamau
On Wed, Feb 10, 2010 at 11:34 AM, Otandeka Simon Peter
sotand...@gmail.com wrote:
 Allan,

 Postgres is very strict on variable types and char conversion.  I have a
 feeling you are trying to access data from a varchar field using an
 integer...

 Can you paste here your schema for that table?

 P.

 On Wed, Feb 10, 2010 at 11:06 AM, Allan Kamau kamaual...@gmail.com wrote:

 Hi,
 I am running PostgreSQL 8.4.2. I have a table that stores a single XML
 document per row in one of its fields. I would like to use xpath to
 retrieve portions of these XML documents.
 Is there a way to do so? (I am running PostgreSQL 8.4.2 configured
 (built) with the --with-libxml and --with-libxslt options.)

 I have looked at 'xpath' but I am unable to get it to work on table fields.

 The command below works.
 SELECT xpath('/doc/name/@first','<doc><name first="David"
 last="Marston"/>...</doc>');

 The command below seems not to execute successfully
 SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM
 staging.simple_table a WHERE a.id=1;

 HINT:  No function matches the given name and argument types. You
 might need to add explicit type casts.


 Allan.




As advised by Peter,
below is an example (including the DDL and DML statements); it drops
and creates a table called simple_table and a sequence called
simple_table_seq, both in the public schema. Please ensure these
objects, if preexisting, are not of importance to you.

DROP SEQUENCE IF EXISTS simple_table_seq CASCADE;
CREATE SEQUENCE simple_table_seq;
DROP TABLE IF EXISTS simple_table CASCADE;
CREATE TABLE simple_table
(id INTEGER NOT NULL DEFAULT NEXTVAL('simple_table_seq')
,xml_payload TEXT
,PRIMARY KEY(id)
)
;
INSERT INTO simple_table
(
id
,xml_payload
)
SELECT
nextval('simple_table_seq')AS id
,'<doc><name first="David" last="Marston"/>some text</doc>' AS xml_payload
;
SELECT a.id,a.xml_payload FROM simple_table a LIMIT 1;
SELECT xpath('/doc/name/@first',SELECT a.xml_payload FROM simple_table
a LIMIT 1);
SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM simple_table a LIMIT 1;

SELECT xpath('/doc/name/@first','<doc><name first="David"
last="Marston"/>some text</doc>');

DROP SEQUENCE IF EXISTS simple_table_seq CASCADE;
DROP TABLE IF EXISTS simple_table CASCADE;


Allan.



Re: [GENERAL] xpath

2010-02-10 Thread Allan Kamau
As advised by Peter,
below is an example (including the DDL and DML statements); it _drops_
and creates a table called simple_table and a sequence called
simple_table_seq, both in the public schema.

DROP SEQUENCE IF EXISTS simple_table_seq CASCADE;
CREATE SEQUENCE simple_table_seq;
DROP TABLE IF EXISTS simple_table CASCADE;
CREATE TABLE simple_table
(id INTEGER NOT NULL DEFAULT NEXTVAL('simple_table_seq')
,xml_payload TEXT
,PRIMARY KEY(id)
)
;
INSERT INTO simple_table
(
id
,xml_payload
)
SELECT
nextval('simple_table_seq')AS id
,'<doc><name first="David" last="Marston"/>some text</doc>' AS xml_payload
;
SELECT a.id,a.xml_payload FROM simple_table a LIMIT 1;
SELECT xpath('/doc/name/@first',SELECT a.xml_payload FROM simple_table
a LIMIT 1);
SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM simple_table a LIMIT 1;

SELECT xpath('/doc/name/@first','<doc><name first="David"
last="Marston"/>some text</doc>');

DROP SEQUENCE IF EXISTS simple_table_seq CASCADE;
DROP TABLE IF EXISTS simple_table CASCADE;


Allan.

On Wed, Feb 10, 2010 at 11:34 AM, Otandeka Simon Peter
sotand...@gmail.com wrote:
 Allan,

 Postgres is very strict on variable types and char conversion.  I have a
 feeling you are trying to access data from a varchar field using an
 integer...

 Can you paste here your schema for that table?

 P.

 On Wed, Feb 10, 2010 at 11:06 AM, Allan Kamau kamaual...@gmail.com wrote:

 Hi,
 I am running PostgreSQL 8.4.2. I have a table that stores a single XML
 document per row in one of its fields. I would like to use xpath to
 retrieve portions of these XML documents.
 Is there a way to do so? (I am running PostgreSQL 8.4.2 configured
 (built) with the --with-libxml and --with-libxslt options.)

 I have looked at 'xpath' but I am unable to get it to work on table fields.

 The command below works.
 SELECT xpath('/doc/name/@first','<doc><name first="David"
 last="Marston"/>...</doc>');

 The command below seems not to execute successfully
 SELECT a.id,xpath('/doc/name/@first',a.xml_payload) FROM
 staging.simple_table a WHERE a.id=1;

 HINT:  No function matches the given name and argument types. You
 might need to add explicit type casts.


 Allan.



[GENERAL] when a table was last vacuumed

2010-02-10 Thread AI Rumman
Is it possible to find out when a table was last vacuumed?


Re: [GENERAL] logging statements from hibernate to valid SQL

2010-02-10 Thread Willy-Bas Loos
On Wed, Feb 10, 2010 at 8:52 AM, Andy Dale andy.d...@gmail.com wrote:

 Hi,

 I would consider telling Hibernate to log the SQL it is generating to a
 file. This can be done by setting the logging category org.hibernate.SQL to
 debug, and for the parameters used in the prepared statements I think you
 must also enable org.hibernate.type on debug (I have not managed to get this
 working under JBoss though).

 The produced output should look something like so:

 2010-02-10 08:04:18,726 DEBUG [org.hibernate.SQL]
 /* named HQL query MessagingSession.findMessages */ select
 message0_.ID_ as col_0_0_
 from
 JBPM_MESSAGE message0_
 where
 message0_.DESTINATION_=?
 and message0_.ISSUSPENDED_<>true
 and (
 message0_.EXCEPTION_ is null
 )

 Cheers,

 Andy

 Hi Andy, thanks for replying.
I tried logging from Hibernate before I tried Postgres logging, but there
the values are replaced with question marks.
I tried what you suggested (added <logger name="org.hibernate.SQL"
level="debug"/> and <logger name="org.hibernate.type" level="debug"/> to
logback.xml),
but there are no messages from org.hibernate.type in the log, and there are
still question marks instead of values.

2010-02-10 09:44:04,228 DEBUG org.hibernate.SQL:401 - select nextval
('schema.sequence')
2010-02-10 09:44:04,231 DEBUG org.hibernate.SQL:401 - insert into
schema.table (field1, field2, field3, ...) values (?, ?, ?, ...)
The actual values are not logged at all. I'm looking to improve that
somehow, but with no success so far.

So I was hoping that I could use PostgreSQL logging to catch the SQL that I
need.

Cheers,

WBL

-- 
Patriotism is the conviction that your country is superior to all others
because you were born in it. -- George Bernard Shaw


Re: [GENERAL] when a table was last vacuumed

2010-02-10 Thread Guillaume Lelarge
On 10/02/2010 10:48, AI Rumman wrote:
 Is it possible to find out when a table was last vacuumed?
 

SELECT schemaname, relname, last_vacuum, last_autovacuum
FROM pg_stat_all_tables;


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] Memory Usage and OpenBSD

2010-02-10 Thread Anton Maksimenkov
2010/2/10 Martijn van Oosterhout klep...@svana.org:
 Can anybody briefly explain to me how one postgres process allocates
 memory for its needs?

 There's no real maximum, as it depends on the exact usage. However, in
 general postgres tries to keep below the values in work_mem and
 maintenance_work_mem. Most of the allocations are quite small, but
 postgresql has an internal allocator, which means that the system only
 sees relatively large allocations.

These relatively large allocations are exactly what I mean. What
size are they?
Is it right to say that these allocations are of work_mem size, or
temp_buffers size, or maintenance_work_mem size? Or something like that.
-- 
antonvm



Re: [GENERAL] One column to multiple columns based on constraints?

2010-02-10 Thread Davor J.
Thank you for the input John.

You understood my sketch just fine, and your JOIN is indeed equivalent to the 
nested select. I said there is no relationship, but in my nested select I 
implicitly created a relationship. I should have been more explicit here: 
what I meant is that there should be no relationship.

From what I know of SQL, one always needs a relationship to append some row 
to the one from the FROM clause. I want to append them without a relationship. 
So if my base table t has columns (time and data), I want a new table 
which has columns (time2008, data2008, time2009, data2009, time2010, 
data2010, ...) where the rows of time2009 and data2009 are constrained by 'year 
2008', but are in no relationship with the rows of time2008. (NULL should 
be used if there are more rows in the year2008 column than in the year2009 
column, and vice versa.)

Regards,
Davor
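For reference, pairing rows by position rather than by a join key, as described above, can be sketched with window functions (available in 8.4+). The FULL OUTER JOIN on a synthetic row number supplies the NULL padding when one year has more rows than the other; the output column names are invented for illustration:

```sql
-- Number each year's rows, then align them side by side by position.
SELECT x.time AS time2008, x.data AS data2008,
       y.time AS time2009, y.data AS data2009
FROM  (SELECT time, data, row_number() OVER (ORDER BY time) AS rn
       FROM t WHERE time >= '2008-01-01' AND time < '2009-01-01') AS x
FULL OUTER JOIN
      (SELECT time, data, row_number() OVER (ORDER BY time) AS rn
       FROM t WHERE time >= '2009-01-01' AND time < '2010-01-01') AS y
USING (rn);
```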

John R Pierce pie...@hogranch.com wrote in message 
news:4b72729d.7020...@hogranch.com...
 Davor J. wrote:
 Let's say you have a table:
 CREATE TABLE t (
 time date,
 data integer
 )

 Suppose you want a new table that has columns similar to the following:
 (x.time, x.data, y.time, y.data, z.time, z.data) where x.time, y.time 
 and z.time columns are constrained (for example x.time > 2007 AND x.time 
 < 2008, y.time > 2008 AND y.time < 2009, z.time > 2010)

 How would you do this? Note that you cannot use JOIN as there is no 
 relationship.

 Currently I came up with something like this:

 SELECT X.*, (SELECT Y.time, Y.data FROM t AS Y WHERE Y.time = X.time + 
 1), (SELECT Z.time ...) FROM t AS X WHERE X.time > 2007 AND X.time < 2008



 Um, why can't you use a join?

 SELECT X.*, Y.time, Y.data FROM t AS X JOIN t as Y ON (Y.time = X.time + 
 '1 year'::INTERVAL) WHERE X.time >= '2007-01-01'::DATE AND X.time < 
 '2008-01-01'::DATE;



 I believe this should be functionally equivalent to your nested select.   I'm 
 not really sure what you're trying to imply with your date <> integer 
 comparisons, so I tried to be a little more rigorous there.





Re: [GENERAL] when a table was last vacuumed

2010-02-10 Thread AI Rumman
But I am using PostgreSQL 8.1 and there is no column named last_vacuum.

On Wed, Feb 10, 2010 at 4:12 PM, Guillaume Lelarge
guilla...@lelarge.infowrote:

 On 10/02/2010 10:48, AI Rumman wrote:
  Is it possible to find out when a table was last vacuumed?
 

 SELECT schemaname, relname, last_vacuum, last_autovacuum
 FROM pg_stat_all_tables;


 --
 Guillaume.
  http://www.postgresqlfr.org
  http://dalibo.com



Re: [GENERAL] when a table was last vacuumed

2010-02-10 Thread Glyn Astill
--- On Wed, 10/2/10, AI Rumman rumman...@gmail.com wrote:

 Is it possible to find out when a table
 was last vacuumed?

Try:

select pg_stat_get_last_vacuum_time(oid) from pg_catalog.pg_class where 
relname = 'tablename';

select pg_stat_get_last_autovacuum_time(oid) from pg_catalog.pg_class where 
relname = 'tablename';






Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Yeb Havinga


Carsten Kropf wrote:
The usage of static types with fixed length was actually no problem 
for me, so I proceeded to variable length types.
I created an n-dimensional point structure called PointND that 
contains a field of float8 values of dynamic length. I also put in an 
int4/int32 field for the length specification, as required by the 
documentation. So the structure looks like the following:

struct PointND
{
    int32 dimensions;
    float8 coordinates[1];
};
The structure should begin with an int32 vl_len_ header. At creation / 
palloc time the size must be set with the SET_VARSIZE macro, and the 
size can be queried with the VARSIZE_* macros - doxygen.postgresql.org 
is your friend here. Take a look at e.g. contrib/cube for examples.


regards,
Yeb Havinga




Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Carsten Kropf
Actually, I thought I did this using the int32 variable called dimensions, 
which should be exactly this field. Unfortunately, it seems that something is 
wrong here. I'll look inside the code of cube to determine what I'm currently 
doing wrong. Thanks so far for your advice.
My in-method is the following:
Datum pointnd_in(PG_FUNCTION_ARGS)
{
	PointND *in = NULL;
	// access the format string
	char *in_string = PG_GETARG_CSTRING(0);
	char *save = in_string;
	// 1024 bytes should be sufficient for one coordinate
	char curr_buffer[1024];
	bool corrupted = false;
	float8 *coordinates = NULL;
	unsigned int dimensions = 0, i = 0;

	// allocate the memory
	coordinates = (float8 *) palloc(sizeof(float8));

	// read bytewise and count the ',' in order to determine the number of
	// dimensions; after that, parse the point
	in_string = find_char(in_string, '(');
	// next character after the opening bracket
	++in_string;
	// read as long as no closing bracket has been found
	for (dimensions = 0; *in_string != ')' && !corrupted; ++dimensions)
	{
		// clear the buffer
		memset(curr_buffer, 0, 1024);
		for (i = 0; *in_string != ',' && *in_string != 0 && *in_string != ')'; ++in_string, ++i)
		{
			if ((*in_string < '0' || *in_string > '9') && *in_string != '.')
			{
				corrupted = true;
				break;
			}
			// copy current character
			curr_buffer[i] = *in_string;
		}

		// something has happened here (no valid number)
		if (corrupted)
		{
			break;
		}
		coordinates = (float8 *) repalloc(coordinates, (dimensions + 1) * sizeof(float8));
		//sscanf(curr_buffer, "%f", &coordinates[dimensions]);
		coordinates[dimensions] = strtod(curr_buffer, NULL);

		// if we have a comma here, skip it
		if (*in_string == ',')
		{
			++in_string;
		}
		if (*in_string == ' ')
		{
			// skip space
			++in_string;
		}
	}

	// something led to a corruption of the point
	if (corrupted)
	{
		ereport(ERROR, (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),
				errmsg("invalid representation of a point: %s, has to look like \"(coord1,coord2,...)\"", save)));
	}
	else
	{
		in = (PointND *) palloc(sizeof(float8) * dimensions + VARHDRSZ);
		SET_VARSIZE(in, dimensions);
		// copy the coordinates to the data area
		memcpy((void *) VARDATA(in), (void *) coordinates, dimensions * sizeof(float8));
	}

	PG_RETURN_POINTER(in);
}

So, probably I am doing something terribly wrong in here, and you could correct 
me at this point. As I already said, I thought that my int32 dimensions 
variable would represent exactly this vl_len_ header field. Is the remaining 
stuff OK then? That means the creation script and the information about storage 
alignment and layout, or do you detect additional errors?
I know that this code might not be the best, but I am still in the testing 
phase of how to achieve something using Postgres with C.
Thanks so far for your advice
Best regards
Carsten Kropf
On 10.02.2010 at 11:39, Yeb Havinga wrote:

 
 Carsten Kropf wrote:
 The usage of static types with fixed length was actually no problem for me, 
 so I proceeded to variable length types.
 I created an n-dimensional point structure called PointND that contains a 
 field of float8 values of dynamic length. I also put in an int4/int32 field 
 for the length specification, as required by the documentation. So the 
 structure looks like the following:
 struct PointND
 {
 int32 dimensions;
 float8 coordinates[1];
 };
 The structure should begin with an int32 vl_len_ header. At creation / palloc 
 time the size must be set with the SET_VARSIZE macro, and the size can be 
 queried with the VARSIZE_* macros - doxygen.postgresql.org is your friend 
 here. Take a look at e.g. contrib/cube for examples.
 
 regards,
 Yeb Havinga
 



Re: [GENERAL] when a table was last vacuumed

2010-02-10 Thread Guillaume Lelarge
Le 10/02/2010 11:23, AI Rumman a écrit :
 But I am using Postgresql 8.1 and here no clumn named last_vacuum.
 

On this old release, the only way you can find such an information is
via the logfile and using DEBUG1 or DEBUG2 level. Not something I would
recommend for a production server.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Yeb Havinga

Carsten Kropf wrote:
Actually, I thought, I did this using the int32 variable called 
dimension which should be exactly this field.

yes.

in = (PointND *) palloc(sizeof(float8) * dimensions + VARHDRSZ);
SET_VARSIZE(in, dimensions);

What about

len = sizeof(float8) * dimensions + VARHDRSZ;
in = (PointND *) palloc0(len);
SET_VARSIZE(in, len);





Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Carsten Kropf
Oh, I see: does the VARSIZE length field have to be the total number of bytes 
occupied (including VARHDRSZ and the size of the structure), or only the size 
that is used by my datatype? Then it would become pretty obvious why 
this was not working.
I'll try it out then.

regards
Carsten Kropf
On 10.02.2010 at 12:04, Yeb Havinga wrote:

 Carsten Kropf wrote:
 Actually, I thought, I did this using the int32 variable called dimension 
 which should be exactly this field.
 yes.
 in = (PointND *) palloc(sizeof(float8) * dimensions + VARHDRSZ);
 SET_VARSIZE(in, dimensions);
 What about
 
 len = sizeof(float8) * dimensions + VARHDRSZ;
 in = (PointND *) palloc0(len);
 SET_VARSIZE(in, len);
 
 
 


[GENERAL] Re: 8.5devel: unexpected and illogical error during transaction, but transaction don't rolled back

2010-02-10 Thread Greg Stark
On Wed, Feb 10, 2010 at 6:59 AM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
 test=*# analyse table_a;
 ERROR:  canceling autovacuum task
 CONTEXT:  automatic vacuum of table test.public.table_a
 ANALYZE
 Time: 1235,600 ms


 I think that's not an ERROR, just a NOTICE for me. And yes, the
 transaction isn't rolled back, so it isn't an error.

Did you start the server from the same terminal? I think the ERROR and
CONTEXT line come from the server, not psql and are expected
behaviour.


-- 
greg



Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Yeb Havinga

Carsten Kropf wrote:

Oh, I see, does the VARSIZE length field have to be the total number of bytes occupied 
(including VARHDRSZ and the size of the structure) or only the size that is used by 
my datatype?

Yes

 Then it would become pretty much obvious, why this is not supposed to work.
I'll try it out then.
  
My €0,02: rename dimensions to vl_len_ to avoid confusion and get 
compiler errors where you now use 'dimensions'. Add a macro that converts 
a PointND structure to its dimension int by taking VARSIZE_ANY_EXHDR / 
sizeof(float8), and use it where dimension is used now. Or, if your 
database is small, you could keep dimension in the structure.





Re: [GENERAL] 8.5devel: unexpected and illogical error during transaction, but transaction don't rolled back

2010-02-10 Thread A. Kretschmer
In response to Greg Stark :
 On Wed, Feb 10, 2010 at 6:59 AM, A. Kretschmer
 andreas.kretsch...@schollglas.com wrote:
  test=*# analyse table_a;
  ERROR:  canceling autovacuum task
  CONTEXT:  automatic vacuum of table test.public.table_a
  ANALYZE
  Time: 1235,600 ms
 
 
  I think, that's not an ERROR, just a NOTICE for me. And yes, the
  transaction isn't rolled back, so it isn't an error.
 
 Did you start the server from the same terminal? I think the ERROR and
 CONTEXT line come from the server, not psql and are expected
 behaviour.

Hi Greg, back from FOSDEM?

Yes, I started the server from the same terminal. It's my private PC
at home; I'm remote there. First, I started the server manually in
the background, then I called psql. So it is my fault, a typical
layer-8 error.


Thank you ;-)


Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



[GENERAL] PHP and PostgreSQL boolean data type

2010-02-10 Thread Thom Brown
Hi,

A long-standing problem we've had with PostgreSQL queries in PHP is
that the returned data for boolean columns is the string 'f' instead
of the native boolean value of false.

An obvious example of this would be for a table with users and their
boolean registered status:

Select user, registered From users;

Then getting a row from the result would reveal: array('user' =>
'thomb', 'registered' => 'f');

Another problem is with arrays, where they are difficult to parse as
they also come through as plain strings with no binary alternative.

Is this a limitation of libpq or a flawed implementation in the PHP
library? And if this is just the case for backwards compatibility, is
there a way to switch it to a more sensible PHP data type?

Thanks

Thom
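Pending a driver-level answer, a common client-side workaround is to normalize the 't'/'f' strings after fetching. This is only a sketch; the helper name is invented and is not part of any official API:

```php
<?php
// Hypothetical helper: map libpq's textual boolean output to PHP booleans.
// PostgreSQL returns 't' / 'f' (or NULL) for boolean columns via pg_fetch_*.
function pg_to_bool($value)
{
    if ($value === null) {
        return null;
    }
    return $value === 't';
}

// Usage with a fetched row:
// $row = pg_fetch_assoc($result);
// $registered = pg_to_bool($row['registered']);
```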



Re: [GENERAL] PHP and PostgreSQL boolean data type

2010-02-10 Thread A. Kretschmer
In response to Thom Brown :
 Hi,
 
 A long-standing problem we've had with PostgreSQL queries in PHP is
 that the returned data for boolean columns is the string 'f' instead
 of the native boolean value of false.

http://andreas.scherbaum.la/blog/archives/302-BOOLEAN-datatype-with-PHP-compatible-output.html

Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: - Header)
GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99



Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Carsten Kropf
Thanks for the hint about the cube; this was actually exactly what I 
have been looking for. I wanted to do something similar to the cube but I 
didn't think that it would be implemented in multiple dimensions. I just 
thought the cube was a 3-d construct, but as I see in the sources, it is, in 
fact, n-dimensional. So my problems are solved here. Btw, I could manage to 
get my own point type saved in the database using your hints, thanks for this.

regards
Carsten Kropf
Am 10.02.2010 um 12:20 schrieb Yeb Havinga:

 Carsten Kropf wrote:
 Oh, I see, does the VARSIZE length field have to be the total number of 
 bytes occupied (including VARHDRSZ and the size of the structure) or only 
 the size that is used by my datatype?
 Yes
 Then it would become pretty much obvious, why this is not supposed to work.
 I'll try it out then.
  
 My €0,02: rename the dimensions to vl_len_ to avoid confusion and get 
 compiler errors where you now use 'dimension'. Add a macro that converts a 
 pointnd structure to dimension int by taking the VARSIZE_ANY_EXHDR / 
 sizeof(float8) and use it where dimension is used now. Or if your database is 
 small you could keep dimension in the structure.
 
 




Re: [GENERAL] PHP and PostgreSQL boolean data type

2010-02-10 Thread Tommy Gildseth

Thom Brown wrote:

 Is this a limitation of libpq or a flawed implementation in the php
library?  And if this is just the case for backwards-compatibility, is
there a way to switch it to a more sensible PHP data type?


Using PDO(http://no.php.net/pdo) will at least give you native values 
for true/false. Arrays, I don't know, since I don't use them.


--
Tommy Gildseth
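
As a server-side sketch that sidesteps client array parsing entirely
(items and tags are made-up names), the array can be flattened before it
ever reaches PHP:

```sql
SELECT id, unnest(tags) AS tag FROM items;         -- one row per element
SELECT id, array_to_string(tags, ',') FROM items;  -- or a delimited string
```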



Re: [GENERAL] PHP and PostgreSQL boolean data type

2010-02-10 Thread Thom Brown
On 10 February 2010 12:11, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
 In response to Thom Brown :
 Hi,

 A long-standing problem we've had with PostgreSQL queries in PHP is
 that the returned data for boolean columns is the string 'f' instead
 of the native boolean value of false.

 http://andreas.scherbaum.la/blog/archives/302-BOOLEAN-datatype-with-PHP-compatible-output.html


Thanks guys.  I can see that this is specifically a PHP issue then.
It seems like an extreme workaround though.  I'd rather see the PHP
library updated in a way that would somehow not break existing code
which checked for an 'f'.  Not quite sure what the solution would be.

Thanks

Thom



[GENERAL] Large Objects: Sizeof and Deleting Unlinked LOs

2010-02-10 Thread Howard Cole

Is there an SQL function to determine the size of a large object?

Also, can I safely delete all the large objects in 
pg_catalog.pg_largeobject? For example:


select lo_unlink(loid) from (select distinct loid from 
pg_catalog.pg_largeobject) as loids where loid not in (select my_oid 
from my_only_table_that_uses_large_objects)


Or are there other things stored in there that I don't know about!

Thanks.

Howard Cole
www.selestial.com
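
As far as I know there is no dedicated SQL size function for large
objects in 8.x, but as a sketch (assuming read access to the catalog) the
per-object size can be computed from the stored pages directly:

```sql
SELECT loid, sum(octet_length(data)) AS bytes
FROM pg_catalog.pg_largeobject
GROUP BY loid;
```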



Re: [GENERAL] Locking referenced table when creating and dropping tables with foreign key constraints (SOLVED)

2010-02-10 Thread Frank Joerdens
On Tue, Feb 2, 2010 at 12:19 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 frank joerdens fiskad...@gmail.com writes:
 It seems that whenever I create a new empty table with a foreign key
 constraint, the transaction will acquire an exclusive lock on the
 referenced table, locking out other writers (not sure if even readers
 as well), and I don't quite see why that is necessary

 It involves an ALTER TABLE ADD TRIGGER command, which necessarily locks
 out writers to avoid race conditions.  I think at the moment it may take
 an exclusive lock and thereby lock out readers as well.  There has been
 some talk of trying to reduce the lock strength needed for ALTER
 operations, but you should not expect that it'll ever be possible to
 do that without blocking writers.

Turns out that on the main db instance, where the problematic table is
mastered, the FK constraint creation goes through in a couple of seconds,
which is not long enough to take down the app.

The actual real world issue we had was due to a script which applied
the new tables and constraints across a set of replicated slave
databases as well as on the master, and the script was broken in that
it left the transaction open on the master database while applying the
constraints on the slaves, and the lock was held way longer than
actually needed that way.

So in spite of the fact that we manage to pretty much max out a
16-core/64GB/half-decent RAID box with our OLTP load (and the FK's in
question reference one of the most update-heavy tables), this
basically still works.

Sorry about kicking up a fuss for mostly nothing.

Regards,

Frank



[GENERAL] Orafce concat operator

2010-02-10 Thread Yeb Havinga

Hello list,

The Orafce compatibility package doesn't seem to have operators defined 
(I looked in the SQL load file). The function I'm specifically interested 
in is Oracle's concatenation, which regards a NULL as the empty string 
and hence returns 'the other value'. This is in contrast with Pg's ||, 
which returns NULL if either of the operands is NULL. The Orafce package 
contains a concat function with the Oracle behaviour; however, an 
operator is missing.


Having an associative operator has benefits over having only a function, 
since that would make translating expressions like 'monkey' || 'nut' || 
NULL easy.


What about adding something like operator ||| in the orafce package for 
concat?


Regards,
Yeb Havinga




Re: [GENERAL] Best way to handle multi-billion row read-only table?

2010-02-10 Thread Asher

John R Pierce wrote:
how do you plan on accessing this monster data?  do you expect to be 
looking up single values or small set of values at a specific time? 
seems to me like this is the sort of data thats more often processed in 
the aggregate, like running a fourier analysis of sliding windows, and 
that sort of data processing may well be more efficiently done with 
fixed block binary files rather than relational databases, as there's no 
real relationships in this data.


The data will initially be accessed via a simple GUI which will allow 
browsing over a subset of the data (subsampled down to 1 
sample/minute/hour, etc. during the data load phase and so massively 
smaller datasets) and then once something interesting has been found 
manually (fully automatic detection of transients has to wait until 
we've formally described what we mean by transient :-)) the start and 
end times can be handed over to our automatic processing code to go 
through the full dataset.


I did consider just sticking the data into a series of big dumb files 
but by putting them in a DB I can both maintain automatic links between 
the full and subsampled data sets and between each data point and the 
equipment that measured it and, possibly more importantly, I can provide 
a simpler interface to the other people on my project to access the 
data. I'm a computer scientist but I'm doing my PhD in the Civil 
Engineering dept and all of my colleagues are civil engineers - all 
quite happy using Matlab's database plugin but less happy writing 
traditional code to crunch through raw files. I'm aware that I'm taking 
a, possibly quite large, performance hit by using a database but I'm 
hoping that the advantages will outweigh this.


Many thanks for all the replies to my query. I'm going to go with a 
partitioned table design and start uploading some data. I'll post how it 
performs once I've got some real size data in it.



Asher.
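
A minimal sketch of the load-time subsampling described above (table and
column names are made up, and avg() stands in for whatever aggregate
suits the signal):

```sql
-- hypothetical rollup of the raw samples into a 1-row-per-minute table
INSERT INTO samples_minute (ts, value)
SELECT date_trunc('minute', ts), avg(value)
FROM samples_raw
GROUP BY 1;
```

The GUI then browses samples_minute (or an hourly equivalent), and only
the automatic processing code touches the full partitioned table.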




Re: [GENERAL] Orafce concat operator

2010-02-10 Thread Pavel Stehule
2010/2/10 Yeb Havinga yebhavi...@gmail.com:
 Hello list,

 The Orafce compatibility package doesn't seem to have operators defined
 (looked in the sql load file). The function I'm specifically interested in,
 is Oracle's concatenation that regards a NULL as the empty string and hence
 returns 'the other value'. This in contrast with Pg's || that returns NULL
 if either of the operands is NULL. The Orafce package contains a concat
 function with Oracle behaviour, however an operator is missing.

 Having an associative operator has benefits over having only a function,
 since that would make translating expressions like 'monkey' || 'nut' || NULL
 easy.

 What about adding something like operator ||| in the orafce package for
 concat?


No, it could be confusing, and it isn't enough, because it isn't only
a || or concat problem. On Oracle the empty string is equal to NULL and
NULL is equal to the empty string.

example: '' is null, length('')

http://www.thunderguy.com/semicolon/2003/04/26/oracle-empty-string-null/

so we are not able to emulate this behaviour.

Regards
Pavel Stehule



 Regards,
 Yeb Havinga






Re: [GENERAL] problems maintaining boolean columns in a large table

2010-02-10 Thread Ben Campbell

Richard Huxton wrote:

On 09/02/10 11:25, Ben Campbell wrote:
[I was talking about moving a needs_indexing flag out of a big table 
into it's own table]

But my gut feeling is that the flag would be better off in it's own
table anyway, eg:

CREATE TABLE needs_indexing (
article_id integer references article(id)
);



That sounds sensible to me


Cool - glad to know I'm not suggesting something totally insane! I never 
can quite tell when I'm doing database stuff :-)


Oh - you might want to consider how/whether to handle multiple entries 
for the same article in your queue.


I settled on:

CREATE TABLE needs_indexing (
  article_id integer REFERENCES article(id) PRIMARY KEY
);

The primary key enforces uniqueness, and any time I want to add an 
article to the queue I just make sure I do a DELETE before the INSERT. 
There are bound to be more efficient ways to do it, but it works.


Thanks,
Ben.
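
As an alternative sketch to the DELETE-before-INSERT dance (42 stands in
for a real article id), an insert-if-absent keeps the queue unique in one
statement, though it is still racy under concurrency without a lock or a
unique_violation handler:

```sql
INSERT INTO needs_indexing (article_id)
SELECT 42
WHERE NOT EXISTS (SELECT 1 FROM needs_indexing WHERE article_id = 42);
```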





Re: [GENERAL] Best way to handle multi-billion row read-only table?

2010-02-10 Thread Greg Stark
On Wed, Feb 10, 2010 at 2:32 PM, Asher as...@piceur.co.uk wrote:
 The data will initially be accessed via a simple GUI which will allow
 browsing over a subset of the data (subsampled down to 1 sample/minute/hour,
 etc.

It sounds like you could use a tool like rrd that keeps various levels
of aggregation and intelligently chooses the right level for the given
query. I think there are such tools though I'm not sure there are any
free ones.

-- 
greg



Re: [GENERAL] Memory Usage and OpenBSD

2010-02-10 Thread Tom Lane
Martijn van Oosterhout klep...@svana.org writes:
 On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
 Can anybody briefly explain to me how one postgres process allocates
 memory for its needs?

 There's no real maximum, as it depends on the exact usage. However, in
 general postgres tries to keep below the values in work_mem and
 maintenance_work_mem. Most of the allocations are quite small, but
 postgresql has an internal allocator which means that the system only
 sees relatively large allocations. The majority will be on the order of
 tens of kilobytes I suspect.

IIRC, the complaint that started this thread was about a VACUUM command
failing.  Plain VACUUM will in fact start out by trying to acquire a
single chunk of size maintenance_work_mem.  (On a small table it might
not be so greedy, but on a large table it will do that.)  So you
probably shouldn't ever try to set that value as large as 1GB if you're
working in a 32-bit address space.  You could maybe do it if you've kept
shared_buffers small, but that seems like the wrong performance tradeoff
in most cases ...

regards, tom lane
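
A hedged postgresql.conf sketch along these lines for a 32-bit build
(the values are illustrative, not recommendations):

```
# keep the single maintenance_work_mem chunk plus shared_buffers
# comfortably inside a 32-bit address space
shared_buffers = 256MB
maintenance_work_mem = 256MB   # plain VACUUM may allocate this in one chunk
```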



Re: [PERFORM] [GENERAL] PostgreSQL - case studies

2010-02-10 Thread Ing. Marcos L. Ortiz Valmaseda

On 10/02/2010 6:49, Scott Marlowe wrote:

Quick note, please stick to text formatted email for the mailing list,
it's the preferred format.

On Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M
jayadevan.maym...@ibsplc.com  wrote:
   

Hello all,
Apologies for the long mail.
I work for a company that is provides solutions mostly on a Java/Oracle 
platform. Recently we moved on of our products to PostgreSQL. The main reason 
was PostgreSQL's GIS capabilities and the inability of government departments 
(especially road/traffic) to spend a lot of money for such projects. This 
product is used to record details about accidents and related analysis (type of 
road, when/why etc) with maps. Fortunately, even in India, an accident 
reporting application does not have to handle many tps :).  So, I can't say 
PostgreSQL's performance was really tested in this case.
Later, I tested one screen of one of our products - load testing with Jmeter. 
We tried it with Oracle, DB2, PostgreSQL and Ingres, and PostgreSQL easily 
out-performed the rest. We tried a transaction mix with 20+ SELECTS, update, 
delete and a few inserts.
 

Please note that benchmarking oracle (and a few other commercial dbs)
and then publishing those results without permission of oracle is
considered to be in breach of their contract.  Yeah, another wonderful
aspect of using Oracle.

That said, and as someone who is not an oracle licensee in any way,
this mimics my experience that postgresql is a match for oracle, db2,
and most other databases in the simple, single db on commodity
hardware scenario.

   

After a really good experience with the database, I subscribed to all 
PostgreSQL groups (my previous experience is all-Oracle) and reading these 
mails, I realized that many organizations are using plain, 'not customized'  
PostgreSQL for databases that handle critical applications.  Since there is no 
company trying to 'sell' PostgreSQL, many of us are not aware of such cases.
 

Actually there are several companies that sell pgsql service, and some
that sell customized versions.  RedHat, Command Prompt, EnterpriseDB,
and so on.

   

Could some of you please share some info on such scenarios- where you are 
supporting/designing/developing databases that run into at least a few hundred 
GBs of data (I know, that is small by todays' standards)?
 

There are other instances of folks on the list sharing this kind of
info you can find by searching the archives.  I've used pgsql for
about 10 years for anywhere from a few megabytes to hundreds of
gigabytes, and all kinds of applications.

Where I currently work we have a main data store for a web app that is
about 180Gigabytes and growing, running on three servers with slony
replication. We handle somewhere in the range of 10k to 20k queries
per minute (a mix of 90% or so reads to 10% writes).  Peak load can be
into the 30k or higher reqs / minute.

The two big servers that handle this load are dual quad core opteron
2.1GHz machines with 32Gig RAM and 16 15krpm SAS drives configured as
2 in RAID-1 for OS and pg_xlog, 2 hot spares, and 12 in a RAID-10 for
the main data.  HW Raid controller is the Areca 1680 which is mostly
stable, except for the occasional (once a year or so) hang problem
which has been described, and which Areca has assured me they are
working on.

Our total downtime due to database outages in the last year or so has
been 10 to 20 minutes, and that was due to a RAID card driver bug that
hits us about once every 300 to 400 days.  the majority of the down
time has been waiting for our hosting provider to hit the big red
switch and restart the main server.

Our other pgsql servers provide search facility, with a db size of
around 300Gig, and statistics at around ~1TB.

   

I am sure PostgreSQL has matured a lot more from the days when these case 
studies where posted. I went through the case studies at EnterpiseDB and 
similar vendors too. But those are customized PostgreSQL servers.
 

Not necessarily.  They sell support more than anything, and the
majority of customization is not for stability but for additional
features, such as mpp queries or replication etc.

The real issue you run into is that many people don't want to tip
their hand that they are using pgsql because it is a competitive
advantage.  It's inexpensive, capable, and relatively easy to use.  If
your competitor is convinced that Oracle or MSSQL server with $240k in
licensing each year is the best choice, and you're whipping them with
pgsql, the last thing you want is for them to figure that out and
switch.

   
Following on from that subject, there are many apps around the world 
that are using PostgreSQL for their business.
We are planning the design and deployment of a large PostgreSQL 
cluster for DWH-ODS-BI apps.
We are documenting everything so that we can publish the information 
later in the PostgreSQL case studies section.


We are using Slony-I for replication, PgBouncer for pooling 

Re: [GENERAL] [PERFORM] PostgreSQL - case studies

2010-02-10 Thread Kevin Grittner
Jayadevan M jayadevan.maym...@ibsplc.com wrote:
 
 Could some of you please share some info on such scenarios- where
 you are supporting/designing/developing databases that run into at
 least a few hundred GBs of data (I know, that is small by todays'
 standards)?
 
I'm a database administrator for the Wisconsin Courts.  We've got
about 200 PostgreSQL database clusters on about 100 servers spread
across the state.  Databases range from tiny (few MB) to 1.3 TB. 
 
Check out this for more info:
 
http://www.pgcon.org/2009/schedule/events/129.en.html
 
I hope that helps.  If you have any particular questions not
answered by the above, just ask.
 
-Kevin



Re: [GENERAL] [PERFORM] PostgreSQL - case studies

2010-02-10 Thread Stephen Frost
* Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:
  Could some of you please share some info on such scenarios- where
  you are supporting/designing/developing databases that run into at
  least a few hundred GBs of data (I know, that is small by todays'
  standards)?

Just saw this, so figured I'd comment:

tsf=> \l+
                                    List of databases
 Name |  Owner   | Encoding |  Collation  |    Ctype    | Access privileges |  Size   | Tablespace | Description
------+----------+----------+-------------+-------------+-------------------+---------+------------+-------------
 beac | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres      | 1724 GB | pg_default |

Doesn't look very pretty, but the point is that it's 1.7TB.  There's a
few other smaller databases on that system too.  PG handles it quite
well, though this is primarily for data-mining.

Thanks,

Stephen




Re: [GENERAL] Orafce concat operator

2010-02-10 Thread Yeb Havinga

Pavel Stehule wrote:

What about adding something like operator ||| in the orafce package for
concat?



no, it could be confusing and it isn't enough, because it isn't only
|| or concat problem. On Oracle empty string is equal to NULL and NULL
is equal to empty string.

example: '' is null, length('')

http://www.thunderguy.com/semicolon/2003/04/26/oracle-empty-string-null/

so we are not able emulate this behave.
  

Hi Pavel,

Thanks for your quick reply. What you say is entirely true, however I'm 
not sure if you completely understood my question, so please do not be 
offended if I try to elaborate it a bit:


I did not ask for full '' = NULL emulation, only for an operator to 
match the concat function, so that code conversion with repetitive 
concats (string || string || string etc.), which occur frequently, is 
easier. Of course a translator can be made that converts that to 
concat(string,concat(string,concat(string etc))), however that's 
confusing too in a different way. So imho the only problem is to think 
of an operator that somehow resembles || so the programmer recognizes a 
concat, but also adds something so the programmer recognizes: not 
strict. What about ||+ ?


And then, at the conversion of e.g. (string || string || string) IS 
NULL, confusion arises, but this is not due to the concat, but more to 
the IS NULL clause together with the strange '' = null, and that must be 
handled otherwise.


So the operator would speed up part of the code conversion.

regards,
Yeb Havinga






Re: [GENERAL] Orafce concat operator

2010-02-10 Thread Pavel Stehule
2010/2/10 Yeb Havinga yebhavi...@gmail.com:
 Pavel Stehule wrote:

 What about adding something like operator ||| in the orafce package for
 concat?


 no, it could be confusing and it isn't enough, because it isn't only
 || or concat problem. On Oracle empty string is equal to NULL and NULL
 is equal to empty string.

 example: '' is null, length('')

 http://www.thunderguy.com/semicolon/2003/04/26/oracle-empty-string-null/

 so we are not able emulate this behave.


 Hi Pavel,

 Thanks for your quick reply. What you say is entirely true, however I'm not
 sure if you completely understood my question, so please do not be offended
 if I try to elaborate it a bit:

 I did not ask for full '' = NULL emulation, only for an operator to match
 the concat function, so that code conversion with repetetive concats are
 easier (string || string || string etc) which occur frequently. Ofcourse a
 translator can be made that converst that to
 concat(string,concat(string,concat(string etc))), however that's confusing
 too in a different way. So imho the only problem is to think of an operator
 that somehow resembles || so the programmer recognizes a concat, but also
 adds something so the programmer recognizes: not strict. What about ||+ ?

 And then, at the conversion of e.g. (string || string || string) IS NULL,
 confusion arises, but this is not due to the concat, but more to the IS NULL
 clause together with the strange '' = null, and that must be handled
 otherwise.

 So the operator would speed up part of the code conversion.


I have a different opinion. You would have to change the application
source code anyway, so I don't like it in Orafce. Maybe we could
implement a varchar2 text type and redefine the basic functions for
that type, but it could be messy and maybe counterproductive. Orafce
is a tool for better migration, but it isn't a full compatibility
tool - what can be implemented effectively and well can be in Orafce.
EnterpriseDB does full compatibility with Oracle.

but - I don't see a problem. Everybody who likes an operator ||| can
define it very simply - not everything has to be in Orafce.

Regards
Pavel Stehule

 regards,
 Yeb Havinga
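
A minimal sketch of the do-it-yourself operator suggested above (the
function name is made up, and coalesce stands in for orafce's concat
behaviour on text):

```sql
CREATE OR REPLACE FUNCTION concat_ns(text, text) RETURNS text AS
$$ SELECT coalesce($1, '') || coalesce($2, '') $$
LANGUAGE sql IMMUTABLE;

CREATE OPERATOR ||| (
    leftarg   = text,
    rightarg  = text,
    procedure = concat_ns
);

-- 'monkey' ||| NULL ||| 'nut' then yields 'monkeynut'
```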







Re: [GENERAL] more than 2GB data string save

2010-02-10 Thread Peter Hunsberger
On Wed, Feb 10, 2010 at 1:21 AM, Scott Marlowe scott.marl...@gmail.com wrote:

 On Wed, Feb 10, 2010 at 12:11 AM, Steve Atkins st...@blighty.com wrote:
  A database isn't really the right way to do full text search for single 
  files that big. Even if they'd fit in the database it's way bigger than the 
  underlying index types tsquery uses are designed for.
 
  Are you sure that the documents are that big? A single document of that 
  size would be 400 times the size of the bible. That's a ridiculously large 
  amount of text, most of a small library.
 
  If the answer is yes, it's really that big and it's really text then look 
  at clucene or, better, hiring a specialist.

 I'm betting it's something like gene sequences or geological samples,
 or something other than straight text.  But even those bear breaking
 down into some kind of simple normalization scheme don't they?


A single genome is ~ 1.3GB as chars, half that size if you use 4 bits
/ nucleotide (which should work for at least 90% of the use cases).
Simplest design is to store a single reference and then for everything
else store deltas from it.  On average that should require about
3-5% of your reference sequence per comparative sample (not counting
FKs and indexes).

As I mentioned on the list a couple of months ago we are in the middle
of stuffing a bunch of molecular data (including entire genomes) into
Postgres.   If anyone else is doing this I would welcome the
opportunity to discuss the issues off list...

--
Peter Hunsberger
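
The reference-plus-deltas design described above could be sketched,
under assumed table and column names, roughly as:

```sql
-- one packed reference sequence per chromosome
CREATE TABLE reference_genome (
    chrom text PRIMARY KEY,
    seq   bytea             -- nucleotides packed at 4 bits each
);

-- per-sample differences from the reference
CREATE TABLE sample_delta (
    sample_id integer,
    chrom     text REFERENCES reference_genome(chrom),
    pos       bigint,       -- position in the reference
    base      char(1)       -- the sample's differing nucleotide
);
```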



Re: [GENERAL] [PERFORM] PostgreSQL - case studies

2010-02-10 Thread David Boreham

Kevin Grittner (kevin.gritt...@wicourts.gov) wrote:

Could some of you please share some info on such scenarios- where
you are supporting/designing/developing databases that run into at
least a few hundred GBs of data (I know, that is small by todays'
standards)?
  

At NuevaSync we use PG in a one-database-per-server design, with our own
replication system between cluster nodes. The largest node has more than 
200G online.

This is an OLTP type workload.









Re: [GENERAL] more than 2GB data string save

2010-02-10 Thread Massa, Harald Armin
 As I mentioned on the list a couple of months ago we are in the middle
 of stuffing a bunch of molecular data (including entire genomes) into
 Postgres.   If anyone else is doing this I would welcome the
 opportunity to discuss the issues off list...


I do not stuff molecules or genomes or genome data into PostgreSQL, but I sure
would love to read a case study about it or listen to a talk at
pgday.eu about this stuff.

Best wishes,

Harald


-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
%s is too gigantic of an industry to bend to the whims of reality


[GENERAL] need clean way to copy col vals from one rec to another

2010-02-10 Thread Gauthier, Dave
create table foo (name text, company text,  job text);
insert into foo (name,company,job) values ('joe','ge','engineer');
insert into foo (name) values ('sue');

What I want to do is map joe's company and job over to the sue record, ending 
up with
'sue' 'ge' 'engineer'

Is there a quick/clever/efficient way to do this?

Thanks in Advance


Re: [GENERAL] Best way to handle multi-billion row read-only table?

2010-02-10 Thread Vincenzo Romano
2010/2/10 Greg Stark gsst...@mit.edu:
 On Wed, Feb 10, 2010 at 2:32 PM, Asher as...@piceur.co.uk wrote:
 The data will initially be accessed via a simple GUI which will allow
 browsing over a subset of the data (subsampled down to 1 sample/minute/hour,
 etc.

 It sounds like you could use a tool like rrd that keeps various levels
 of aggregation and intelligently chooses the right level for the given
 query. I think there are such tools though I'm not sure there are any
 free ones.

Use as much memory as possible to fit indexes as well as portions of
the table space itself in RAM.
Of course, poor indexing can kill any effort.

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] need clean way to copy col vals from one rec to another

2010-02-10 Thread Alban Hertroys
On 10 Feb 2010, at 17:28, Gauthier, Dave wrote:

 create table foo (name text, company text,  job text);
 insert into foo (name,company,job) values ('joe','ge','engineer');
 insert into foo (name) values ('sue');
 
 What I want to do is map joe's company and job over to the sue record, ending 
 up with
 'sue' 'ge' 'engineer'
  
 Is there a quick/clever.efficient way to do this?

UPDATE foo SET company = joe.company, job = joe.job
  FROM foo AS joe
 WHERE foo.name = 'sue'
   AND joe.name = 'joe';

You could also do this on insert by using:

INSERT INTO foo (name, company, job)
SELECT 'sue', joe.company, joe.job
  FROM foo AS joe
 WHERE joe.name = 'joe';

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] need clean way to copy col vals from one rec to another

2010-02-10 Thread Gauthier, Dave
Outstanding !  
Thanks Alban.

-Original Message-
From: Alban Hertroys [mailto:dal...@solfertje.student.utwente.nl] 
Sent: Wednesday, February 10, 2010 11:46 AM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] need clean way to copy col vals from one rec to another

On 10 Feb 2010, at 17:28, Gauthier, Dave wrote:

 create table foo (name text, company text,  job text);
 insert into foo (name,company,job) values ('joe','ge','engineer');
 insert into foo (name) values ('sue');
  
 What I want to do is map joe's company and job over to the sue record, ending 
 up with
 'sue' 'ge' 'engineer'
  
 Is there a quick/clever.efficient way to do this?

UPDATE foo SET company = joe.company, job = joe.job
  FROM foo AS joe
 WHERE foo.name = 'sue'
   AND joe.name = 'joe';

You could also do this on insert by using:

INSERT INTO foo (name, company, job)
SELECT 'sue', joe.company, joe.job
  FROM foo AS joe
 WHERE joe.name = 'joe';

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







[GENERAL] Logging statement/duration on the same line

2010-02-10 Thread Baron Schwartz
I'm writing a log parser front-end.  I've seen some log samples that
look like this, with the duration and statement on the same line:

LOG:  duration: 1.565 ms  statement: SELECT * FROM users WHERE user_id='692'

But in my 8.3.9 test installation, durations are always logged on a
separate line.  Is the sample above from a different version?  Or is
there a way to get this output with different configuration?  Here's
the type of output that I see in 8.3.9:

2010-02-08 15:31:50.872 EST LOG:  statement: select 1;
2010-02-08 15:31:50.881 EST LOG:  duration: 10.870 ms

-- 
Baron Schwartz
Percona Inc: Services and Support for MySQL
http://www.percona.com/



Re: [GENERAL] Logging statement/duration on the same line

2010-02-10 Thread hubert depesz lubaczewski
On Wed, Feb 10, 2010 at 12:55:03PM -0500, Baron Schwartz wrote:
 I'm writing a log parser front-end.  I've seen some log samples that
 look like this, with the duration and statement on the same line:
 
 LOG:  duration: 1.565 ms  statement: SELECT * FROM users WHERE user_id='692'
 
 But in my 8.3.9 test installation, durations are always logged on a
 separate line.  Is the sample above from a different version?  Or is
 there a way to get this output with different configuration?  Here's
 the type of output that I see in 8.3.9:
 
 2010-02-08 15:31:50.872 EST LOG:  statement: select 1;
 2010-02-08 15:31:50.881 EST LOG:  duration: 10.870 ms

turn off log_statement and log_duration; instead set
log_min_duration_statement = 0.

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Logging statement/duration on the same line

2010-02-10 Thread Tom Lane
Baron Schwartz ba...@xaprb.com writes:
 I'm writing a log parser front-end.  I've seen some log samples that
 look like this, with the duration and statement on the same line:

 LOG:  duration: 1.565 ms  statement: SELECT * FROM users WHERE user_id='692'

 But in my 8.3.9 test installation, durations are always logged on a
 separate line.  Is the sample above from a different version?  Or is
 there a way to get this output with different configuration?

If the statement text was already printed due to log_statement,
duration logging doesn't repeat it.  So if you prefer that type of
display, turn off log_statement and instead set
log_min_duration_statement = 0 to log everything via duration logging.
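
For reference, the combination described above would look roughly like this in postgresql.conf (values are illustrative, not quoted from the thread):

```ini
log_statement = 'none'             # don't log statement text separately
log_duration = off                 # avoid the bare "duration:" lines
log_min_duration_statement = 0     # log every statement with its duration
```

With these settings each statement appears once, as a single "duration: ... ms  statement: ..." line.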

regards, tom lane



Re: [GENERAL] Memory Usage and OpenBSD

2010-02-10 Thread Jeff Ross

Tom Lane wrote:

Martijn van Oosterhout klep...@svana.org writes:
 On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
  Can anybody briefly explain me how one postgres process allocate
  memory for it needs?
There's no real maximum, as it depends on the exact usage. However, in
general postgres tries to keep below the values in work_mem and
maintainence_workmem. Most of the allocations are quite small, but
postgresql has an internal allocator which means that the system only
sees relatively large allocations. The majority will be in the order of
tens of kilobytes I suspect.



IIRC, the complaint that started this thread was about a VACUUM command
failing.  Plain VACUUM will in fact start out by trying to acquire a
single chunk of size maintenance_work_mem.  (On a small table it might
not be so greedy, but on a large table it will do that.)  So you
probably shouldn't ever try to set that value as large as 1GB if you're
working in a 32-bit address space.  You could maybe do it if you've kept
shared_buffers small, but that seems like the wrong performance tradeoff
in most cases ...

regards, tom lane

  


That would have been my original message.

I've been running a series of pgbench tests on an i386 dual processor 
XEON server with 4G of RAM and a RAID10 disk on an LSI MegaRAID w/BBU 
controller.  I fixed the original problem by re-enabling better 
login.conf values for the postgresql user.


I ran the pgtune wizard and started with the settings I got from that.  
On i386 OpenBSD the recommended settings are far too large and cause  a 
kernel panic in short order.  Here are the settings that pgtune gives 
for -T web and -c 200:


maintenance_work_mem = 240MB # pgtune wizard 2010-02-10
effective_cache_size = 2816MB # pgtune wizard 2010-02-10
work_mem = 18MB # pgtune wizard 2010-02-10
wal_buffers = 4MB # pgtune wizard 2010-02-10
checkpoint_segments = 8 # pgtune wizard 2010-02-10
shared_buffers = 960MB # pgtune wizard 2010-02-10
max_connections = 200 # pgtune wizard 2010-02-10

I've been whittling that back and have got down to this:

maintenance_work_mem = 240MB # pgtune wizard 2010-01-27
checkpoint_completion_target = 0.7 # pgtune wizard 2010-01-27
effective_cache_size = 2816MB # pgtune wizard 2010-01-27
work_mem = 18MB # pgtune wizard 2010-01-27
wal_buffers = 4MB # pgtune wizard 2010-01-27
checkpoint_segments = 8 # pgtune wizard 2010-01-27
full_page_writes = off
synchronous_commit = off  
max_connections = 100

shared_buffers = 250MB # pgtune wizard 2010-01-27
work_mem = 64MB
temp_buffers = 32MB
checkpoint_segments = 32

Additionally, in OpenBSD's sysctl.conf I have this set:
kern.maxproc=10240
kern.maxfiles=20480

kern.shminfo.shmseg=32
kern.seminfo.semmni=256
kern.seminfo.semmns=2048
kern.shminfo.shmmax=283115520
kern.maxvnodes=6000
kern.bufcachepercent=70

The kern.shminfo.shmmax value is just enough to let postgresql start.  
kern.bufcachepercent=70 matches the effective_cache_size value.


pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench
with scale starting at 10 and then incrementing by 10.  I call it three 
times for each scale.  I've turned on logging to 'all' to try and help 
figure out where the system panics, so that may lower the TPS somewhat 
but I have not been very favorably impressed with the speed of these 
U320 15K disks in RAID10 yet.
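
The run loop described above could be sketched like this (host, user, and flags are taken from the command shown; the loop structure itself is my reconstruction, and it echoes the commands rather than running them, since pgbench needs a live cluster):

```shell
#!/bin/sh
# Reconstruction of the described benchmark loop: scales 10..50,
# three pgbench runs per scale.  Drop the echo to execute for real.
for SCALE in 10 20 30 40 50; do
  for RUN in 1 2 3; do
    echo "pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench"
  done
done
```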


Scale 10: 


tps = 644.152616 (including connections establishing)
tps = 644.323919 (excluding connections establishing)

tps = 644.032366 (including connections establishing)
tps = 644.219732 (excluding connections establishing)

tps = 659.320222 (including connections establishing)
tps = 659.506025 (excluding connections establishing)

Scale 20:

tps = 643.830650 (including connections establishing)
tps = 644.001003 (excluding connections establishing)

tps = 631.357346 (including connections establishing)
tps = 631.538591 (excluding connections establishing)

tps = 629.035682 (including connections establishing)
tps = 629.245788 (excluding connections establishing)

Scale 30:

tps = 571.640243 (including connections establishing)
tps = 571.777080 (excluding connections establishing)

tps = 565.742963 (including connections establishing)
tps = 565.74 (excluding connections establishing)

tps = 564.058710 (including connections establishing)
tps = 564.203138 (excluding connections establishing)

Scale 40:

tps = 525.018290 (including connections establishing)
tps = 525.132745 (excluding connections establishing)

tps = 515.277398 (including connections establishing)
tps = 515.419313 (excluding connections establishing)


tps = 513.006317 (including connections establishing)
tps = 513.129971 (excluding connections establishing)

Scale 50:

tps = 468.323275 (including connections establishing)
tps = 468.415751 (excluding connections establishing)

tps = 453.100701 (including connections establishing)
tps = 453.201980 (excluding connections 

Re: [GENERAL] Logging statement/duration on the same line

2010-02-10 Thread Baron Schwartz
Thanks Tom, Depesz,

On Wed, Feb 10, 2010 at 1:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 turn off log_statement and instead set
 log_min_duration_statement = 0 to log everything via duration logging.

That does the trick.  Time to write more test cases.

Thanks
Baron



Re: [GENERAL] [PHP] PHP and PostgreSQL boolean data type

2010-02-10 Thread Chris

Thom Brown wrote:

Hi,

A long-standing problem we've had with PostgreSQL queries in PHP is
that the returned data for boolean columns is the string 'f' instead
of the native boolean value of false.

An obvious example of this would be for a table with users and their
boolean registered status:

Select user, registered From users;

Then getting a row from the result would reveal: array('user' =>
'thomb', 'registered' => 'f');


That's how postgres stores them, php doesn't understand the field is a 
boolean.


# create table a(a int, b boolean);
# insert into a(a, b) values (1, true);
# insert into a(a, b) values (2, false);
# SELECT * from a;
 a | b
---+---
 1 | t
 2 | f
(2 rows)

Also while not in the official docs, it is a note from 2002:

http://www.php.net/manual/en/ref.pgsql.php#18749

and

http://www.php.net/manual/en/function.pg-fetch-array.php says

Each value in the array is represented as a string. Database NULL 
values are returned as NULL.




Another problem is with arrays, where they are difficult to parse as
they also come through as plain strings with no binary alternative.


Haven't played with postgres arrays so can't say either way - but same 
as above, php just fetches the data.


There's an example that might help you - 
http://www.php.net/manual/en/ref.pgsql.php#89841
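
One workaround not mentioned in the thread, but sometimes used: cast the boolean on the SQL side, so PHP receives '0'/'1' strings that behave sanely in a boolean context. A minimal sketch against the example table above:

```sql
-- Cast boolean to integer in the query; pg_fetch_array() then returns
-- the strings '0' and '1' instead of 'f' and 't'.
SELECT a, b::int AS b FROM a;

-- For the original users example ("user" must be quoted here, since
-- unquoted user is the current_user keyword):
SELECT "user", registered::int AS registered FROM users;
```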


--
Postgresql & php tutorials
http://www.designmagick.com/




[GENERAL] windows7 login- user account

2010-02-10 Thread paul e

Before I installed PostgreSQL, Windows 7 went straight to my user account.
Now when it boots I have to go to a selection page where I choose between my 
user account and a postgresql user account. Is there any way to bypass this so 
it boots directly to my user account? 
_
Your E-mail and More On-the-Go. Get Windows Live Hotmail Free.
http://clk.atdmt.com/GBL/go/201469229/direct/01/

[GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Wang, Mary Y
Hi,

Hmm. Things are still getting interesting around here.

Here is my complicated problem.  I tried to delete a user from my users table, 
but it said
ERROR:  bug_assigned_to_fk referential integrity violation - key in users 
still referenced from bug
Ok.
Then I saw this statement in the .sql file.
CREATE CONSTRAINT TRIGGER bug_assigned_to_fk AFTER DELETE ON users  FROM 
bug NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE 
RI_FKey_noaction_del ('bug_assigned_to_fk', 'bug', 'users', 'FULL', 
'assigned_to', 'user_id');
Then I used this command to delete the constraint trigger:
 drop trigger bug_assign_to_fk on bug;
I received error:
ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation bug
I also tried
drop trigger bug_assign_to_fk on user;
I received this error:
ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation users

Here is the bug table.  Please NOTE there is no constraint listed in the bug 
table.

\d bug
  Table bug
   Attribute   |  Type   |   Modifier
---------------+---------+-----------------------------------------------
 bug_id| integer | not null default nextval('bug_pk_seq'::text)
 group_id  | integer | not null default '0'
 status_id | integer | not null default '0'
 priority  | integer | not null default '0'
 category_id   | integer | not null default '0'
 submitted_by  | integer | not null default '0'
 assigned_to   | integer | not null default '0'
 date  | integer | not null default '0'
 summary   | text|
 details   | text|
 close_date| integer |
 bug_group_id  | integer | not null default '0'
 resolution_id | integer | not null default '0'
Indices: bug_group_id,
 bug_groupid_assignedto_statusid,
 bug_groupid_statusid,
 bug_pkey

Any ideas on how can I drop the bug_assigned_to_fk trigger so that I can remove 
an user?

Any help is appreciated.
Mary





Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Adrian Klaver
On Wednesday 10 February 2010 4:56:21 pm Wang, Mary Y wrote:
 Hi,

 Hmm. Things are still getting interesting around here.

 Here is my complicated problem.  I tried to delete a user from my users
 table, but it said ERROR:  bug_assigned_to_fk referential integrity
 violation - key in users still referenced from bug Ok.

Why not try deleting the information in bug that had assigned_to=user_id? Or do 
you want to keep that info in bug?

Is this still in version 7.1?

 Then I saw this statement in the .sql file.
 CREATE CONSTRAINT TRIGGER bug_assigned_to_fk AFTER DELETE ON users 
 FROM bug NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE
 PROCEDURE RI_FKey_noaction_del ('bug_assigned_to_fk', 'bug', 'users',
 'FULL', 'assigned_to', 'user_id'); Then I used this command to delete the
 constraint trigger:
  drop trigger bug_assign_to_fk on bug;
 I received error:
 ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation bug
 I also tried
 drop trigger bug_assign_to_fk on user;
 I received this error:
 ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation
 users

 Here is the bug table.  Please NOTE there is no constraint listed in the
 bug table.

 \d bug

How about \d+ bug ? 


 Mary



-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Wang, Mary Y
Yes.  I do want to keep that information in the bug : assigned_to=user_id
Yes. I'm embarrassed to say it's still in 7.1.3. I know, I know that I need to 
upgrade, but I do need to fix a problem now.
\d+ bug returns
 Table bug
   Attribute   |  Type   |                   Modifier                    | Description
---------------+---------+-----------------------------------------------+-------------
 bug_id| integer | not null default nextval('bug_pk_seq'::text) |
 group_id  | integer | not null default '0' |
 status_id | integer | not null default '0' |
 priority  | integer | not null default '0' |
 category_id   | integer | not null default '0' |
 submitted_by  | integer | not null default '0' |
 assigned_to   | integer | not null default '0' |
 date  | integer | not null default '0' |
 summary   | text|  |
 details   | text|  |
 close_date| integer |  |
 bug_group_id  | integer | not null default '0' |
 resolution_id | integer | not null default '0' |
Indices: bug_group_id,
 bug_groupid_assignedto_statusid,
 bug_groupid_statusid,
 bug_pkey


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Wednesday, February 10, 2010 5:10 PM
To: pgsql-general@postgresql.org
Cc: Wang, Mary Y
Subject: Re: [GENERAL] How do I drop a Complicated Constraint Trigger After 
Delete Execute Procedure?

On Wednesday 10 February 2010 4:56:21 pm Wang, Mary Y wrote:
 Hi,

 Hmm. Things are still getting interesting around here.

 Here is my complicated problem.  I tried to delete a user from my 
 users table, but it said ERROR:  bug_assigned_to_fk referential 
 integrity violation - key in users still referenced from bug Ok.

Why not try deleting the information in bug that had assigned_to=user_id? Or do 
you want to keep that info in bug?

Is this still in version 7.1?

 Then I saw this statement in the .sql file.
 CREATE CONSTRAINT TRIGGER bug_assigned_to_fk AFTER DELETE ON users 
 FROM bug NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE 
 PROCEDURE RI_FKey_noaction_del ('bug_assigned_to_fk', 'bug', 
 'users', 'FULL', 'assigned_to', 'user_id'); Then I used this command 
 to delete the constraint trigger:
  drop trigger bug_assign_to_fk on bug;
 I received error:
 ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation bug
 I also tried
 drop trigger bug_assign_to_fk on user;
 I received this error:
 ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation 
 users

 Here is the bug table.  Please NOTE there is no constraint listed in 
 the bug table.

 \d bug

How about \d+ bug ? 


 Mary



--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Problem with pg_compresslog'd archives

2010-02-10 Thread Koichi Suzuki
I found it's a pg_compresslog problem (calculation of the XNOOP record
length used in pg_decompresslog).  I'm fixing the bug and will
upload the fix shortly.

Sorry for inconvenience.

--
Koichi Suzuki

2010/2/8 Karl Denninger k...@denninger.net:
 This may belong in a bug report, but I'll post it here first...

 There appears to be a **SERIOUS** problem with using pg_compresslog and
 pg_uncompresslog with Postgresql 8.4.2.

 Here's my configuration snippet:

 full_page_writes = on                   # recover from partial page writes
 wal_buffers = 256kB                     # min 32kB
                                        # (change requires restart)
 #wal_writer_delay = 200ms               # 1-10000 milliseconds

 #commit_delay = 0                       # range 0-100000, in microseconds
 #commit_siblings = 5                    # range 1-1000

 # - Checkpoints -

 checkpoint_segments = 64                # in logfile segments, min 1, 16MB each
 #checkpoint_timeout = 5min              # range 30s-1h
 checkpoint_completion_target = 0.9      # checkpoint target duration, 0.0 - 1.0
 #checkpoint_warning = 30s               # 0 disables

 archive_command = 'test ! -f /dbms/pg_archive/%f.bz2 && pg_compresslog
 %p | bzip2 > /dbms/pg_archive/%f.bz2'           # command to use to
 archive a logfile segment

 All appears to be fine with the writes, and they are being saved off on
 the nightly backups without incident.

 I take a full dump using the instructions in the documentation and make
 sure I copy the proper must have file for consistency to be reached.

 The problem comes when I try to restore.

 recovery_conf contains:

 restore_command = '/usr/local/pgsql/recovery.sh %f %p'

 And that file contains:


 #! /bin/sh

 infile=$1
 outfile=$2

 if test -f /dbms/pg_archive/$infile.bz2
 then
        bunzip2 -c /dbms/pg_archive/$infile.bz2 |
 /usr/local/pgsql/bin/pg_decompresslog - $outfile
        exit 0
 else
        exit 1
 fi

 ==

 The problem is that it appears that some of the segments being saved are
 no good!  On occasion I get this when trying to restore...

 Feb  7 12:43:51 dbms2 postgres[2001]: [210-1] LOG:  restored log file
 00010171009A from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [211-1] LOG:  restored log file
 00010171009B from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [212-1] LOG:  restored log file
 00010171009C from archive
 Feb  7 12:43:52 dbms2 postgres[2001]: [213-1] LOG:  restored log file
 00010171009D from archive
 Feb  7 12:43:53 dbms2 postgres[2001]: [214-1] LOG:  restored log file
 00010171009E from archive
 Feb  7 12:43:53 dbms2 postgres[2001]: [215-1] LOG:  restored log file
 00010171009F from archive
 Feb  7 12:43:54 dbms2 postgres[2001]: [216-1] LOG:  restored log file
 0001017100A0 from archive
 Feb  7 12:43:54 dbms2 postgres[2001]: [217-1] LOG:  restored log file
 0001017100A1 from archive
 Feb  7 12:43:55 dbms2 postgres[2001]: [218-1] LOG:  restored log file
 0001017100A2 from archive
 Feb  7 12:43:55 dbms2 postgres[2001]: [219-1] LOG:  restored log file
 0001017100A3 from archive
 Feb  7 12:43:56 dbms2 postgres[2001]: [220-1] LOG:  restored log file
 0001017100A4 from archive
 Feb  7 12:43:56 dbms2 postgres[2001]: [221-1] LOG:  restored log file
 0001017100A5 from archive
 Feb  7 12:43:57 dbms2 postgres[2001]: [222-1] LOG:  restored log file
 0001017100A6 from archive
 Feb  7 12:43:57 dbms2 postgres[2001]: [223-1] PANIC:  corrupted page
 pointers: lower = 772, upper = 616, special = 0
 Feb  7 12:43:57 dbms2 postgres[2001]: [223-2] CONTEXT:  xlog redo
 hot_update: rel 1663/616245/1193269; tid 53/93; new 53/4
 Feb  7 12:43:57 dbms2 postgres[2000]: [1-1] LOG:  startup process (PID
 2001) was terminated by signal 6: Abort trap
 Feb  7 12:43:57 dbms2 postgres[2000]: [2-1] LOG:  terminating any other
 active server processes

 Eek.

 I assume this means that either A6 or A7 is corrupt.  But I have the
 file both in the restore AND ON THE MACHINE WHERE IT ORIGINATED:

 On the SOURCE machine (which is running just fine):
 tickerforum# cksum *171*A[67]*
 172998591 830621 0001017100A6.bz2
 1283345296 1541006 0001017100A7.bz2

 And off the BACKUP archive, which is what I'm trying to restore:

 # cksum *171*A[67]*
 172998591 830621 0001017100A6.bz2
 1283345296 1541006 0001017100A7.bz2

 Identical, says the checksums.

 This is VERY BAD - if pg_compresslog is damaging the files in some
 instances then ANY BACKUP TAKEN USING THEM IS SUSPECT AND MAY NOT
 RESTORE!!

 Needless to say this is a MAJOR problem.

 -- Karl Denninger








-- 
--
Koichi Suzuki


Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Tom Lane
Wang, Mary Y mary.y.w...@boeing.com writes:
 Here is my complicated problem.  I tried to delete a user from my users 
 table, but it said
 ERROR:  bug_assigned_to_fk referential integrity violation - key in users 
 still referenced from bug
 Ok.
 Then I saw this statement in the .sql file.
 CREATE CONSTRAINT TRIGGER bug_assigned_to_fk AFTER DELETE ON users  FROM 
 bug NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE 
 RI_FKey_noaction_del ('bug_assigned_to_fk', 'bug', 'users', 'FULL', 
 'assigned_to', 'user_id');
 Then I used this command to delete the constraint trigger:
  drop trigger bug_assign_to_fk on bug;
 I received error:
 ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation bug

It looks to me like you misspelled the trigger name --- what you
quote there is bug_assigned_to_fk not bug_assign_to_fk.  Also, the
trigger is attached to table users not table bug.

 Here is the bug table.  Please NOTE there is no constraint listed in the bug 
 table.

I think in 7.1 that trigger would be shown as a trigger if you did \d users,
but it's not going to be mentioned by \d bug.

regards, tom lane



Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Wang, Mary Y
Ok.  I typed the correct name this time, and got the same error.
drop trigger bug_assigned_to_fk on users;
ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation users 
drop trigger bug_assigned_to_fk on bug;
ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation bug
Here is my user table:
\d users
 Table users
   Attribute   | Type  |    Modifier
 --------------+-------+------------------------------------------------
 user_id  | integer   | not null default nextval('users_pk_seq'::text)
 user_name| text  | not null default ''
 email| text  | not null default ''
 user_pw  | character varying(32) | not null default ''
 realname | character varying(32) | not null default ''
 status   | character(1)  | not null default 'A'
 shell| character varying(20) | not null default '/bin/bash'
 unix_pw  | character varying(40) | not null default ''
 unix_status  | character(1)  | not null default 'N'
 unix_uid | integer   | not null default '0'
 unix_box | character varying(10) | not null default 'shell1'
 add_date | integer   | not null default '0'
 confirm_hash | character varying(32) |
 mail_siteupdates | integer   | not null default '0'
 mail_va  | integer   | not null default '0'
 authorized_keys  | text  |
 email_new| text  |
 people_view_skills   | integer   | not null default '0'
 people_resume| text  | not null default ''
 timezone | character varying(64) | default 'GMT'
 language | integer   | not null default '1'
 third_party  | integer   | not null default 1
 personal_status  | character(32) |
 bemsid   | integer   |
 sensitive_info   | character(64) |
 reason_access| text  |
 organization | text  |
 brass_first_time | character(1)  | default '0'
 mail_sitenews_update | integer   | default '0'
 doclinks_sort_order  | character(1)  | default 'A'
Indices: idx_users_username,
 user_user,
 users_user_pw

Someone mentioned using 'alter table'.  Would that work?  But I'm 
not sure how to do it because \d doesn't show the constraint.

Any ideas?
Mary

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Wednesday, February 10, 2010 6:30 PM
To: Wang, Mary Y
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How do I drop a Complicated Constraint Trigger After 
Delete Execute Procedure? 

Wang, Mary Y mary.y.w...@boeing.com writes:
 Here is my complicated problem.  I tried to delete a user from my 
 users table, but it said
 ERROR:  bug_assigned_to_fk referential integrity violation - key in users 
 still referenced from bug
 Ok.
 Then I saw this statement in the .sql file.
 CREATE CONSTRAINT TRIGGER bug_assigned_to_fk AFTER DELETE ON users  FROM 
 bug NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE 
 RI_FKey_noaction_del ('bug_assigned_to_fk', 'bug', 'users', 'FULL', 
 'assigned_to', 'user_id');
 Then I used this command to delete the constraint trigger:
  drop trigger bug_assign_to_fk on bug;
 I received error:
 ERROR:  DropTrigger: there is no trigger bug_assign_to_fk on relation bug

It looks to me like you misspelled the trigger name --- what you quote there is 
bug_assigned_to_fk not bug_assign_to_fk.  Also, the trigger is attached to 
table users not table bug.

 Here is the bug table.  Please NOTE there is no constraint listed in the bug 
 table.

I think in 7.1 that trigger would be shown as a trigger if you did \d users, 
but it's not going to be mentioned by \d bug.

regards, tom lane



Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Adrian Klaver
On Wednesday 10 February 2010 7:07:08 pm Wang, Mary Y wrote:
 Ok.  I typed the correct name this time, and got the same error.
 drop trigger bug_assigned_to_fk on users;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation
 users  drop trigger bug_assigned_to_fk on bug;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation
 bug Here is my user table:
 \d users
  Table users
    Attribute   | Type  |    Modifier
  --------------+-------+------------------------------------------------
  user_id  | integer   | not null default nextval('users_pk_seq'::text)
  user_name| text  | not null default ''
  email| text  | not null default ''
  user_pw  | character varying(32) | not null default ''
  realname | character varying(32) | not null default ''
  status   | character(1)  | not null default 'A'
  shell        | character varying(20) | not null default '/bin/bash'
  unix_pw      | character varying(40) | not null default ''
  unix_status  | character(1)          | not null default 'N'
 unix_uid | integer   | not null default '0'
 unix_box | character varying(10) | not null default 'shell1'
 add_date | integer   | not null default '0'
 confirm_hash | character varying(32) |
  mail_siteupdates | integer   | not null default '0'
  mail_va  | integer   | not null default '0'
  authorized_keys  | text  |
  email_new| text  |
  people_view_skills   | integer   | not null default '0'
  people_resume| text  | not null default ''
  timezone | character varying(64) | default 'GMT'
  language | integer   | not null default '1'
  third_party  | integer   | not null default 1
  personal_status  | character(32) |
  bemsid   | integer   |
  sensitive_info   | character(64) |
  reason_access| text  |
  organization | text  |
  brass_first_time | character(1)  | default '0'
  mail_sitenews_update | integer   | default '0'
  doclinks_sort_order  | character(1)  | default 'A'
 Indices: idx_users_username,
  user_user,
  users_user_pw

 Someone mentioned using 'alter table'.  Would that work?  But
 I'm not sure how to do it because \d doesn't show the constraint.

 Any ideas?
 Mary




For what it is worth the manuals for this version are here:
http://www.postgresql.org/docs/manuals/archive.html

I do not see anything in the manual that shows ALTER TABLE being useful in this 
situation. I am afraid 7.1 is before my time and at this point I cannot think 
of a solution other than set the assigned_to value in bugs to NULL where 
assigned_to=user_id. Sort of negates the point of a relationship between bugs 
and users. Another option would be to create a 'dummy' user to 
whom 'unassigned ' bugs would be referenced.
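
A rough sketch of those two options, using the table and column names from the thread (untested on 7.1; the user_id value 42 is hypothetical). Note that per the earlier \d bug output, assigned_to is NOT NULL DEFAULT '0', so a plain NULL would violate the column constraint:

```sql
-- Option 1: point the user's bugs at the column default (0).
-- Assumes the FK tolerates 0, e.g. a placeholder users row with
-- user_id = 0 exists; otherwise use option 2.
UPDATE bug SET assigned_to = 0 WHERE assigned_to = 42;

-- Option 2: create a dummy user and reassign, then delete the real user.
INSERT INTO users (user_name, email) VALUES ('unassigned', '');
UPDATE bug
   SET assigned_to = (SELECT user_id FROM users WHERE user_name = 'unassigned')
 WHERE assigned_to = 42;
DELETE FROM users WHERE user_id = 42;
```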

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Tom Lane
Wang, Mary Y mary.y.w...@boeing.com writes:
 Ok.  I typed the correct name this time, and got the same error.
 drop trigger bug_assigned_to_fk on users;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation users 
 
 drop trigger bug_assigned_to_fk on bug;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation bug

Huh.  Do you get anything from
select * from pg_trigger where tgname = 'bug_assigned_to_fk';
or
select * from pg_trigger where tgconstrname = 'bug_assigned_to_fk';
?  If I recall the 7.1 code at all, it has to be printing one or the
other of those fields as the name shown in the FK error message ...

regards, tom lane



Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Wang, Mary Y
Wow!! Good memory. 
I didn't get anything with the 'select * from pg_trigger where tgname = 
'bug_assigned_to_fk';' but
I got something here :
 select * from pg_trigger where tgconstrname = 'bug_assigned_to_fk';
 tgrelid | tgname                       | tgfoid | tgtype | tgenabled | tgisconstraint | tgconstrname       | tgconstrrelid | tgdeferrable | tginitdeferred | tgnargs | tgattr | tgargs
---------+------------------------------+--------+--------+-----------+----------------+--------------------+---------------+--------------+----------------+---------+--------+--------------------------------------------------------------------------
 7335118 | RI_ConstraintTrigger_9217018 |   1655 |     17 | t         | t              | bug_assigned_to_fk |       7329978 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
 7335118 | RI_ConstraintTrigger_9217016 |   1654 |      9 | t         | t              | bug_assigned_to_fk |       7329978 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
 7329978 | RI_ConstraintTrigger_9217014 |   1644 |     21 | t         | t              | bug_assigned_to_fk |       7335118 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
(3 rows)

What do I do with them?

Mary Y Wang

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Wednesday, February 10, 2010 8:02 PM
To: Wang, Mary Y
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How do I drop a Complicated Constraint Trigger After 
Delete Execute Procedure? 

Wang, Mary Y mary.y.w...@boeing.com writes:
 Ok.  I typed the correct name this time, and got the same error.
 drop trigger bug_assigned_to_fk on users;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation users 
 
 drop trigger bug_assigned_to_fk on bug;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation bug

Huh.  Do you get anything from
select * from pg_trigger where tgname = 'bug_assigned_to_fk'; or
select * from pg_trigger where tgconstrname = 'bug_assigned_to_fk'; ?  
If I recall the 7.1 code at all, it has to be printing one or the other of 
those fields as the name shown in the FK error message ...

regards, tom lane



[GENERAL] Postgres Triggers issue

2010-02-10 Thread u235sentinel
I have a strange problem we noticed the other day with triggers. We're 
running 8.3.3 on Solaris 10 (Intel) and have a feed that comes in 
regularly to populate a table we're working on. The feed works just 
fine inserting rows; however, the following trigger stops the feed until 
we remove it. Any thoughts on what I'm doing wrong here?


Thanks!

---

CREATE OR REPLACE FUNCTION r.m_t()
RETURNS trigger AS
$BODY$
BEGIN
INSERT INTO temp_m_t VALUES (NEW.*,1+1);
RETURN NULL;
END;
$BODY$
LANGUAGE 'plpgsql';


CREATE TRIGGER tafter
AFTER INSERT OR UPDATE
ON r.m_a
FOR EACH ROW
EXECUTE PROCEDURE r.m_t();
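For comparison, a minimal variant of the same trigger function — a sketch, not a diagnosis of the failure — that expands NEW's columns through SELECT and makes the extra value explicit. It assumes temp_m_t has the same columns as r.m_a plus one trailing integer column:

```sql
-- Sketch only: same intent as the function above, with the inserted
-- row spelled out via SELECT so NEW's columns expand explicitly.
CREATE OR REPLACE FUNCTION r.m_t()
RETURNS trigger AS
$BODY$
BEGIN
    INSERT INTO temp_m_t SELECT NEW.*, 2;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;
```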




Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Wang, Mary Y
I guess I could just use
DELETE FROM pg_trigger WHERE tgname = 'RI_ConstraintTrigger_9217018'
I just wanted to make sure that I don't mess up the system table.

Mary

-Original Message-
From: Wang, Mary Y 
Sent: Wednesday, February 10, 2010 8:10 PM
To: 'Tom Lane'
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] How do I drop a Complicated Constraint Trigger After 
Delete Execute Procedure? 

Wow!! Good memory. 
I didn't get anything with the 'select * from pg_trigger where tgname = 
'bug_assigned_to_fk';' but I got something here :
 select * from pg_trigger where tgconstrname = 'bug_assigned_to_fk';
 tgrelid |            tgname            | tgfoid | tgtype | tgenabled | tgisconstraint |    tgconstrname    | tgconstrrelid | tgdeferrable | tginitdeferred | tgnargs | tgattr |                                  tgargs
---------+------------------------------+--------+--------+-----------+----------------+--------------------+---------------+--------------+----------------+---------+--------+--------------------------------------------------------------------------
 7335118 | RI_ConstraintTrigger_9217018 |   1655 |     17 | t         | t              | bug_assigned_to_fk |       7329978 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
 7335118 | RI_ConstraintTrigger_9217016 |   1654 |      9 | t         | t              | bug_assigned_to_fk |       7329978 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
 7329978 | RI_ConstraintTrigger_9217014 |   1644 |     21 | t         | t              | bug_assigned_to_fk |       7335118 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
(3 rows)

What do I do with them?

Mary Y Wang

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Wednesday, February 10, 2010 8:02 PM
To: Wang, Mary Y
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How do I drop a Complicated Constraint Trigger After 
Delete Execute Procedure? 

Wang, Mary Y mary.y.w...@boeing.com writes:
 Ok.  I typed the correct name this time, and got the same error.
 drop trigger bug_assigned_to_fk on users;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation users 
 
 drop trigger bug_assigned_to_fk on bug;
 ERROR:  DropTrigger: there is no trigger bug_assigned_to_fk on relation bug

Huh.  Do you get anything from
select * from pg_trigger where tgname = 'bug_assigned_to_fk'; or
select * from pg_trigger where tgconstrname = 'bug_assigned_to_fk'; ?  
If I recall the 7.1 code at all, it has to be printing one or the other of 
those fields as the name shown in the FK error message ...

regards, tom lane



Re: [GENERAL] How do I drop a Complicated Constraint Trigger After Delete Execute Procedure?

2010-02-10 Thread Tom Lane
Wang, Mary Y mary.y.w...@boeing.com writes:
 I got something here :
  select * from pg_trigger where tgconstrname = 'bug_assigned_to_fk';
  tgrelid |            tgname            | tgfoid | tgtype | tgenabled | tgisconstraint |    tgconstrname    | tgconstrrelid | tgdeferrable | tginitdeferred | tgnargs | tgattr |                                  tgargs
 ---------+------------------------------+--------+--------+-----------+----------------+--------------------+---------------+--------------+----------------+---------+--------+--------------------------------------------------------------------------
  7335118 | RI_ConstraintTrigger_9217018 |   1655 |     17 | t         | t              | bug_assigned_to_fk |       7329978 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
  7335118 | RI_ConstraintTrigger_9217016 |   1654 |      9 | t         | t              | bug_assigned_to_fk |       7329978 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
  7329978 | RI_ConstraintTrigger_9217014 |   1644 |     21 | t         | t              | bug_assigned_to_fk |       7335118 | f            | f              |       6 |        | bug_assigned_to_fk\000bug\000users\000FULL\000assigned_to\000user_id\000
 (3 rows)

Ah, right ... if memory were better, I'd have remembered that FK
triggers used to be named like this.  Try

drop trigger "RI_ConstraintTrigger_9217018" on bugs;

and so forth.  I'm not sure which of the three triggers are
on which of the two tables, but it won't take you long to
find out.  (And yes, you need those double quotes.)
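A sketch of that lookup, joining pg_trigger to pg_class to see which relation each generated trigger sits on before dropping it (the relation names below are guesses until the query confirms them):

```sql
-- Which table is each RI trigger attached to?
SELECT t.tgname, c.relname
  FROM pg_trigger t, pg_class c
 WHERE c.oid = t.tgrelid
   AND t.tgconstrname = 'bug_assigned_to_fk';

-- Then drop each one on the relation the query reported, double-quoting
-- the mixed-case generated name:
DROP TRIGGER "RI_ConstraintTrigger_9217018" ON bug;   -- or users, per the query
```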

regards, tom lane



Re: [GENERAL] Memory Usage and OpenBSD

2010-02-10 Thread Greg Smith

Jeff Ross wrote:

pgbench is run with this:
pgbench -h varley.openvistas.net -U _postgresql -t 2 -c $SCALE pgbench
with scale starting at 10 and then incrementing by 10.  I call it 
three times for each scale.  I've turned on logging to 'all' to try 
and help figure out where the system panics, so that may lower the TPS 
somewhat but I have not been very favorably impressed with the speed 
of these U320 15K disks in RAID10 yet.


-c sets the number of clients active at once.  pgbench has a database 
scale option when you're initializing, -s, that sets how many records 
are in the tables, and therefore how large the database is.  If you 
don't set the scale to a larger number, so that -c < -s, you'll get 
bad performance results.  The way you're saying scale but changing the 
client numbers is a little confusing.


I can't comment on whether yours are good or bad numbers without 
knowing the actual database scale number.  When reporting a pgbench 
result, it's handy to include the complete output from one of the runs, 
just so people can see exactly what test was run.  After that you can 
just show the TPS values.  Showing the command used to initialize the 
pgbench database can also be helpful.
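For instance, a run following that advice might look like this (host, user, and database names are taken from the earlier message; the scale and client counts are illustrative):

```shell
# Initialize the pgbench tables at scale 100 (-s sets how many records
# the tables hold), then run with 10 clients so that -c < -s.
pgbench -i -s 100 -h varley.openvistas.net -U _postgresql pgbench
pgbench -c 10 -t 2000 -h varley.openvistas.net -U _postgresql pgbench
```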



--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] questions about a table's row estimates

2010-02-10 Thread Greg Smith

Ben Chobot wrote:

I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about n_live_tup. 
Shouldn't that be at least fairly close to (n_tup_ins - n_tup-del)? It doesn't 
seem to be, but I'm unclear why.
  

Insert 2000 tuples.
Delete 1000 tuples.
vacuum
Insert 1000 tuples. These go into the free space the deleted tuples used 
to be in.

analyze

n_tup_ins=3000
n_tup_del=1000
n_live_tup=3000

If there's any delete/vacuum/reuse churn here, there's no reason to believe 
the insert/delete and live counts will be close at all.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] questions about a table's row estimates

2010-02-10 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Ben Chobot wrote:
 I'm looking at pg_stat_user_tables in 8.4.2, and I'm confused about 
 n_live_tup. Shouldn't that be at least fairly close to (n_tup_ins - 
 n_tup-del)? It doesn't seem to be, but I'm unclear why.
 
 Insert 2000 tuples.
 Delete 1000 tuples.
 vacuum
 Insert 1000 tuples. These go into the free space the deleted tuples used 
 to be in.
 analyze

 n_tup_ins=3000
 n_tup_del=1000
 n_live_tup=3000

Huh?

regression=# create table foo (f1 int);
CREATE TABLE
regression=# insert into foo select generate_series(1,2000);
INSERT 0 2000
regression=# select n_live_tup,n_tup_ins,n_tup_del from pg_stat_user_tables  
where relname = 'foo';
 n_live_tup | n_tup_ins | n_tup_del 
------------+-----------+-----------
       2000 |      2000 |         0
(1 row)

regression=# delete from foo where f1 <= 1000;
DELETE 1000
regression=# select n_live_tup,n_tup_ins,n_tup_del from pg_stat_user_tables  
where relname = 'foo';
 n_live_tup | n_tup_ins | n_tup_del 
------------+-----------+-----------
       1000 |      2000 |      1000
(1 row)

regression=# insert into foo select generate_series(2001,3000);
INSERT 0 1000
regression=# select n_live_tup,n_tup_ins,n_tup_del from pg_stat_user_tables  
where relname = 'foo';
 n_live_tup | n_tup_ins | n_tup_del 
------------+-----------+-----------
       2000 |      3000 |      1000
(1 row)

regression=# 

The only easy explanation I can think of for Ben's complaint is if he
reset the stats counters sometime during the table's existence.

regards, tom lane



Re: [GENERAL] PHP and PostgreSQL boolean data type

2010-02-10 Thread Torsten Zühlsdorff

Thom Brown wrote:


A long-standing problem we've had with PostgreSQL queries in PHP is
that the returned data for boolean columns is the string 'f' instead
of the native boolean value of false.


This problem has been solved for nearly five years now by PDO. You can use
an abstraction like DDDBL (see my signature) if you want to save time while
using PDO.
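A rough sketch of checking this yourself through PDO (connection parameters are placeholders, and whether the pgsql driver hands back a native bool can vary by driver version, so inspect the result with var_dump):

```php
<?php
// Sketch: fetch a boolean through PDO and inspect what PHP type
// actually comes back.
$pdo  = new PDO('pgsql:host=localhost;dbname=test', 'user', 'secret');
$stmt = $pdo->query('SELECT true AS flag');
$row  = $stmt->fetch(PDO::FETCH_ASSOC);
var_dump($row['flag']);   // hope for bool(true) rather than the string 't'
?>
```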

Greetings from Germany,
Torsten

--
http://www.dddbl.de - a database layer that abstracts the work with 8 
different database systems,
separates queries from applications, and can automatically evaluate 
query results.




Re: [GENERAL] Extending SQL in C using VARIABLE length type

2010-02-10 Thread Carsten Kropf
Thanks a lot so far. I adapted my structures and am now storing two fields 
(v_len_ and dimensions), and the storage is now working properly. If I now try 
to combine two of these points into a range (like cube), with an upper and a 
lower bound n-dimensional point, I don't see how to achieve this.
I tried around a little bit and ended up with a structure like the following:
/**
 * basic structure definition for a range containing an upper and a lower point 
(in multiple dimensions)
 */
struct Range
{
int32 v_len_;
/**
 * the upper limit in each dimension
 */
struct PointND * upper;
/**
 * the lower limit in each dimension
 */
struct PointND * lower;
};
However, the problem again is how to put this range into a table. I don't know 
exactly how to do this; I tried the following:
len = VARSIZE(upper) + VARSIZE(lower) + VARHDRSZ + 2 * 
sizeof(struct Point *);
result = (Range *) palloc0(len);
//  result->upper = upper;
//  result->lower = lower;
memcpy((void *) result->upper, (void *) upper, VARSIZE(upper));
memcpy((void *) result->lower, (void *) lower, VARSIZE(lower));

// set the var size
SET_VARSIZE(result, len);
But this didn't do the trick. I have not yet found anything in the Postgres 
code showing how to build such a combined type, unfortunately (or I did not 
look at the right places until now). How would one do this?

Thanks in advance
regards
Carsten Kropf
On 10.02.2010 at 12:20, Yeb Havinga wrote:

 Carsten Kropf wrote:
 Oh, I see, does the VARSIZE length field have to be the total number of 
 bytes occupied (including VARHDRSZ and the size of the structure) or only 
 the size that is used by my datatype?
 Yes
 Then it would become pretty much obvious, why this is not supposed to work.
 I'll try it out then.
  
 My €0,02: rename the dimensions to vl_len_ to avoid confusion and get 
 compiler errors where you now use 'dimension'. Add a macro that converts a 
 pointnd structure to dimension int by taking the VARSIZE_ANY_EXHDR / 
 sizeof(float8) and use it where dimension is used now. Or if your database is 
 small you could keep dimension in the structure.
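Putting that advice together with the pointer problem above, one possible shape for the combined type is a single flat varlena that stores both point payloads inline and recovers them by offset. This is a sketch only — names, macros, and layout are illustrative and untested, not the one true answer:

```c
/*
 * Sketch only: pointers cannot live inside an on-disk varlena, so
 * flatten both PointND payloads into one contiguous datum.
 */
typedef struct Range
{
    int32 vl_len_;      /* total size in bytes, set via SET_VARSIZE */
    char  data[1];      /* lower point bytes, then upper point bytes */
} Range;

static Range *
range_make(struct PointND *lower, struct PointND *upper)
{
    Size   lower_sz = VARSIZE(lower);
    Size   len = VARHDRSZ + lower_sz + VARSIZE(upper);
    Range *r = (Range *) palloc0(len);

    memcpy(r->data, lower, lower_sz);                   /* lower first */
    memcpy(r->data + lower_sz, upper, VARSIZE(upper));  /* then upper */
    SET_VARSIZE(r, len);
    return r;
}

/* Accessors that rebuild the point pointers from the flat layout. */
#define RANGE_LOWER(r)  ((struct PointND *) ((r)->data))
#define RANGE_UPPER(r)  ((struct PointND *) ((r)->data + VARSIZE(RANGE_LOWER(r))))
```

The key design point is that the Range owns copies of both points, so the whole datum can be written to and read back from disk without any embedded addresses.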
 
 