Re: [GENERAL] Oracle to PostgreSQL

2017-11-09 Thread Felix Kunde
On Thu, 2017-11-09 at 17:01 +0530, Brahmam Eswar wrote:
> You can create a composite type in PostgreSQL:
> CREATE TYPE complex AS (r integer, i integer);
> You would use an array in this case:
> DECLARE
> carr complex[];

I once faced the same thing, and did as Laurenz suggested.
You will like Postgres' array support.
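
For the archives, a minimal sketch of how such a composite-type array behaves in PL/pgSQL (the DO block and the values are purely illustrative):

CREATE TYPE complex AS (r integer, i integer);

DO $$
DECLARE
    carr complex[];
BEGIN
    -- composite values can be appended like any other array element
    carr := carr || (1, 2)::complex;
    carr := carr || (3, 4)::complex;
    -- field access on an array element needs the extra parentheses
    RAISE NOTICE 'second element: r=%, i=%', (carr[2]).r, (carr[2]).i;
END
$$;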


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trigger based logging alternative to table_log

2017-03-27 Thread Felix Kunde
> I have some code which uses table_log 
> (http://pgfoundry.org/projects/tablelog/) to keep a log of changes to 
> selected tables.  
> I don't use the restore part, just the logging part.  
> It creates a new table for each table being logged, with several additional 
> columns, and adds triggers to insert rows in the new table for changes in the 
> original.
> The problem is that table_log hasn't been maintained in nearly 10 years, and 
> pgfoundry itself seems to have one foot in the grave and one on a banana peel.
> There are several other systems out there which store the data in hstore or 
> json, which I would probably use if doing this from scratch.  But I'd rather 
> preserve the existing log tables than either throw away that data, or port it 
> over to a new format.
> Is there any better-maintained code out there which would be compatible with 
> the existing schema used by table_log?
 
>Cheers,
>Jeff

Afaik, there is no compatible solution. If table_log works for you, then keep it.
Do you miss a feature, or why do you worry about the unmaintained code base? I
think if there were a problem with any new version, the developer would fix it.
There is also an existing GitHub repo
(https://github.com/glynastill/table_log_pl).

Recently, I have done a comparison of different audit tools to check how well
my creation (pgMemento) works compared to the others, so I know how most of
them work. table_log, for example, logs both OLD and NEW, so you get all your
data twice. Other solutions log either OLD or NEW. table_log uses only one
timestamp field whereas others use two (or a range).

As table_log uses history tables with a relational layout, I would suggest
considering other extensions that do a similar thing. If you are interested in
only logging the data, you might check out the temporal_tables extension
(http://pgxn.org/dist/temporal_tables/). In my test it had the least impact on
write operations and disk consumption.
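
For reference, the usual temporal_tables setup looks roughly like the following (table names are only an example; check the extension's docs before relying on this):

CREATE EXTENSION temporal_tables;

ALTER TABLE employees
  ADD COLUMN sys_period tstzrange NOT NULL
  DEFAULT tstzrange(current_timestamp, NULL);

CREATE TABLE employees_history (LIKE employees);

CREATE TRIGGER versioning_trigger
BEFORE INSERT OR UPDATE OR DELETE ON employees
FOR EACH ROW EXECUTE PROCEDURE versioning('sys_period', 'employees_history', true);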

Using hstore or json for logging might sound cool in the first place, but it
only has benefits if you don't want to adapt the auditing behaviour to
schema changes (like new columns etc.). With pgMemento I decided to go for
jsonb, but after many hours of programming complex restore functions I can say
that my only real argument for using it now is that I only log the values of
changed fields. I like that, but it makes the trigger overhead bigger.
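
For comparison, with 9.5-era jsonb functions a changed-fields-only delta inside a row trigger boils down to something like this (a sketch, not pgMemento's actual code; jsonb_each, to_jsonb and jsonb_object_agg need 9.5+):

SELECT jsonb_object_agg(o.key, o.value)
FROM jsonb_each(to_jsonb(OLD)) o
LEFT JOIN jsonb_each(to_jsonb(NEW)) n ON o.key = n.key
WHERE n.value IS DISTINCT FROM o.value;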

Greetings from Berlin
Felix


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] utf8 issues

2015-02-25 Thread Felix Ivan Romero Rodríguez

I've got the following issue with postgres 9.3, trying to restore a backup from
an sql script:
invalid byte sequence for encoding "UTF8": 0xc2de
How can I solve it?





Re: [GENERAL] Getting a delta of two json-typed objects? (a breeze w/ hstore)

2015-01-22 Thread Felix Kunde
Hi, not so long ago I asked myself the same question. I've written two
functions that do this:

SELECT build_json(array_agg(to_json(old.key)), array_agg(old.value)) 
FROM json_each($1) old
LEFT OUTER JOIN json_each($2) new ON old.key = new.key
WHERE old.value::text <> new.value::text OR new.key IS NULL
HAVING array_agg(to_json(old.key)) IS NOT NULL
AND array_agg(old.value) IS NOT NULL;

and this:

CREATE OR REPLACE FUNCTION pgmemento.build_json(
json_keys ANYARRAY,
json_values ANYARRAY
) RETURNS JSON AS
$$
DECLARE
json_string TEXT := '{';
delimeter TEXT := '';
json_result JSON;
BEGIN
FOR i IN array_lower(json_keys, 1)..array_upper(json_keys, 1) LOOP
json_string := json_string || delimeter || json_keys[i] || ':' || 
json_values[i];
delimeter := ',';
END LOOP;
json_string := json_string || '}';
EXECUTE format('SELECT %L::json', json_string) INTO json_result;
RETURN json_result;
END
$$
LANGUAGE plpgsql;

Not the best way, actually. I wonder how I could benefit from the new
json_build_object(VARIADIC "any") function in 9.4. I'd have to get my
keys and values into alternating order... hm.
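
For what it's worth, 9.4 also ships json_object(keys text[], values text[]), which would sidestep the alternating-order problem, with the caveat that every value comes back as a JSON string (a sketch):

SELECT json_object(array_agg(old.key), array_agg(old.value::text))
FROM json_each($1) old
LEFT OUTER JOIN json_each($2) new ON old.key = new.key
WHERE old.value::text <> new.value::text OR new.key IS NULL;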

Then I've also found this nice example, which might do the things you are 
looking for:
http://schinckel.net/2014/05/25/querying-json-in-postgres/
 
Ahoi
Felix


Sent: Thursday, 22 January 2015 at 20:37
From: Wells Oliver wellsoli...@gmail.com
To: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: [GENERAL] Getting a delta of two json-typed objects? (a breeze w/
hstore)

Hey all. I have a trigger function which does a delta of two hstore values by
just doing a - b; this check is performed to see if there's a delta, and if
not I don't log it.
 
I'm wondering if there's a suitable method for comparing two json objects? I
don't have 9.4 yet so I can't use jsonb, but if there's any input here I'd
appreciate it.
 
Thanks.
 --
Wells Oliver
wellsoli...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] table versioning approach (not auditing)

2014-10-01 Thread Felix Kunde
Hey there. Thank you very much for that fix! That's why I'd like to have joint
development and joint testing. It's way more convincing for users to go for a
solution that is tested by some experts than just by a random developer :)

I'm open to creating a new project and pushing the code there. I don't care
about the name. Then we might figure out which parts are already good, which
parts could be improved and where to go next. I think switching to JSONB, for
example, will be easy, as it offers the same functions as JSON afaik.
 

Sent: Tuesday, 30 September 2014 at 21:16
From: Adam Brusselback adambrusselb...@gmail.com
To: Felix Kunde felix-ku...@gmx.de
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: Re: [GENERAL] table versioning approach (not auditing)

Felix, I'd love to see a single, well maintained project. For example, I just 
found yours, and gave it a shot today after seeing this post.  I found a bug 
when an update command is issued, but the old and new values are all the same.  
The trigger will blow up.  I've got a fix for that, but if we had one project 
that more than a handful of people used, stuff like that would be quashed very 
quickly.
 
I love the design of it by the way. Any idea what it will take to move to JSONB 
for 9.4? 
 
 
On Tue, Sep 30, 2014 at 7:22 AM, Felix Kunde felix-ku...@gmx.de wrote:

Hey
 
yes, I'm adding an additional key to each of my tables. First I wanted to use
the primary key as one column in my audit_log table, but in some of my tables
the PK consists of more than one column. Plus it's nice to have one key that is
called the same across all tables.
 
To get a former state for one row at date x I need to join the latest delta
BEFORE date x with each delta AFTER date x. If I logged complete rows, this
joining part would not be necessary, but as I usually work with spatial
databases that have complex geometries and also image files, that strategy
consumes too much disk space.
 
If there are more users following a similar approach, I wonder why we don't
throw all the good ideas together, to have one solution that is tested,
maintained and improved by more developers. This would be great.
 
Felix
 

Sent: Monday, 29 September 2014 at 23:25
From: Abelard Hoffman abelardhoff...@gmail.com
To: Felix Kunde felix-ku...@gmx.de
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: Re: [GENERAL] table versioning approach (not auditing)

Thank you Felix, Gavin, and Jonathan for your responses.
 
Felix & Jonathan: both of you mention just storing deltas. But if you do that,
how do you associate the delta record with the original row? Where's the PK 
stored, if it wasn't part of the delta?
 
Felix, thank you very much for the example code. I took a look at your table 
schemas. I need to study it more, but it looks like the way you're handling the 
PK is that you're adding a separate synthetic key (audit_id) to each table that's
being versioned. And then storing that key along with the delta.
 
So then to find all the versions of a given row, you just need to join the 
audit row with the schema_name.table_name.audit_id column. Is that right? The 
only potential drawback there is there's no referential integrity between the 
audit_log.audit_id and the actual table.
 
I do like that approach very much though, in that it eliminates the need to 
interrogate the json data in order to perform most queries.
 
AH
 
 
 
On Mon, Sep 29, 2014 at 12:26 AM, Felix Kunde felix-ku...@gmx.de wrote:

Hey
 
I've also tried to implement database versioning using JSON to log changes in
tables. Here it is: https://github.com/fxku/audit
I've got two versioning tables, one storing information about all transactions 
that happened and one where i put the JSON logs of row changes of each table. 
I'm only logging old values and not complete rows.
 
Then I got a function that recreates a database state at a given time into a 
separate schema - either to VIEWs, MVIEWs or TABLES. This database state could 
then be indexed in order to work with it. You can also reset the production 
state to the recreated past state.
 
Unfortunately I've got no time to work on it further at the moment, and I have not
done tests with many changes in the database, so I can't say whether the recreation
process scales well. One downside I've realised is that using the json_agg
function has limits when I've got binary data. It gets too long. So I'm really
looking forward to using JSONB.

There are more plans in my mind. By having a Transaction_Log table it should be
possible to revert only certain transactions. I'm also thinking of parallel
versioning, e.g. different users all working with their version of the
database and committing their changes to the production state. As I've got a
unique history ID for each table and each row, I should be able to map the
affected records.

Re: [GENERAL] table versioning approach (not auditing)

2014-09-30 Thread Felix Kunde
Hey
 
yes, I'm adding an additional key to each of my tables. First I wanted to use
the primary key as one column in my audit_log table, but in some of my tables
the PK consists of more than one column. Plus it's nice to have one key that is
called the same across all tables.
 
To get a former state for one row at date x I need to join the latest delta
BEFORE date x with each delta AFTER date x. If I logged complete rows, this
joining part would not be necessary, but as I usually work with spatial
databases that have complex geometries and also image files, that strategy
consumes too much disk space.
 
If there are more users following a similar approach, I wonder why we don't
throw all the good ideas together, to have one solution that is tested,
maintained and improved by more developers. This would be great.
 
Felix
 

Sent: Monday, 29 September 2014 at 23:25
From: Abelard Hoffman abelardhoff...@gmail.com
To: Felix Kunde felix-ku...@gmx.de
Cc: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: Re: [GENERAL] table versioning approach (not auditing)

Thank you Felix, Gavin, and Jonathan for your responses.
 
Felix & Jonathan: both of you mention just storing deltas. But if you do that,
how do you associate the delta record with the original row? Where's the PK 
stored, if it wasn't part of the delta?
 
Felix, thank you very much for the example code. I took a look at your table 
schemas. I need to study it more, but it looks like the way you're handling the 
PK is that you're adding a separate synthetic key (audit_id) to each table that's
being versioned. And then storing that key along with the delta.
 
So then to find all the versions of a given row, you just need to join the 
audit row with the schema_name.table_name.audit_id column. Is that right? The 
only potential drawback there is there's no referential integrity between the 
audit_log.audit_id and the actual table.
 
I do like that approach very much though, in that it eliminates the need to 
interrogate the json data in order to perform most queries.
 
AH
 
 
 
On Mon, Sep 29, 2014 at 12:26 AM, Felix Kunde felix-ku...@gmx.de wrote:

Hey
 
I've also tried to implement database versioning using JSON to log changes in
tables. Here it is: https://github.com/fxku/audit
I've got two versioning tables, one storing information about all transactions 
that happened and one where i put the JSON logs of row changes of each table. 
I'm only logging old values and not complete rows.
 
Then I got a function that recreates a database state at a given time into a 
separate schema - either to VIEWs, MVIEWs or TABLES. This database state could 
then be indexed in order to work with it. You can also reset the production 
state to the recreated past state.
 
Unfortunately I've got no time to work on it further at the moment, and I have not
done tests with many changes in the database, so I can't say whether the recreation
process scales well. One downside I've realised is that using the json_agg
function has limits when I've got binary data. It gets too long. So I'm really
looking forward to using JSONB.

There are more plans in my mind. By having a Transaction_Log table it should be 
possible to revert only certain transactions. I'm also thinking of parallel 
versioning, e.g. different users are all working with their version of the 
database and commit their changes to the production state. As I've got a unique 
history ID for each table and each row, I should be able to map the affected 
records.

Have a look and tell me what you think of it.

Cheers
Felix
 

Sent: Monday, 29 September 2014 at 04:00
From: Abelard Hoffman abelardhoff...@gmail.com
To: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: [GENERAL] table versioning approach (not auditing)

Hi. I need to maintain a record of all changes to certain tables to assist in
viewing history and reverting changes when necessary (customer service makes an
incorrect edit, etc.).
 
I have studied these two audit trigger examples:
https://wiki.postgresql.org/wiki/Audit_trigger
https://wiki.postgresql.org/wiki/Audit_trigger_91plus
 
I've also read about two other approaches to versioning:
1. maintain all versions in one table, with a flag to indicate which is the 
current version
2. have a separate versions table for each real table, and insert into the 
associated version table whenever an update or insert is done.
 
My current implementation is based on the wiki trigger examples, using a single 
table, and a json column to record the row changes (rather than hstore). What I 
like about that, in particular, is I can have a global, chronological view of 
all versioned changes very easily.
 
But there are two types of queries I need to run.
1. Find all changes made by a specific user
2. Find all changes related to a specific record
 
#1 is simple to do. The versioning table has a user_id column of who made the
change, so I can query on that.

Re: [GENERAL] table versioning approach (not auditing)

2014-09-29 Thread Felix Kunde
Hey
 
I've also tried to implement database versioning using JSON to log changes in
tables. Here it is: https://github.com/fxku/audit
I've got two versioning tables, one storing information about all transactions 
that happened and one where i put the JSON logs of row changes of each table. 
I'm only logging old values and not complete rows.
 
Then I got a function that recreates a database state at a given time into a 
separate schema - either to VIEWs, MVIEWs or TABLES. This database state could 
then be indexed in order to work with it. You can also reset the production 
state to the recreated past state.
 
Unfortunately I've got no time to work on it further at the moment, and I have not
done tests with many changes in the database, so I can't say whether the recreation
process scales well. One downside I've realised is that using the json_agg
function has limits when I've got binary data. It gets too long. So I'm really
looking forward to using JSONB.

There are more plans in my mind. By having a Transaction_Log table it should be 
possible to revert only certain transactions. I'm also thinking of parallel 
versioning, e.g. different users are all working with their version of the 
database and commit their changes to the production state. As I've got a unique 
history ID for each table and each row, I should be able to map the affected 
records.

Have a look and tell me what you think of it.

Cheers
Felix
 

Sent: Monday, 29 September 2014 at 04:00
From: Abelard Hoffman abelardhoff...@gmail.com
To: pgsql-general@postgresql.org pgsql-general@postgresql.org
Subject: [GENERAL] table versioning approach (not auditing)

Hi. I need to maintain a record of all changes to certain tables to assist in
viewing history and reverting changes when necessary (customer service makes an
incorrect edit, etc.).
 
I have studied these two audit trigger examples:
https://wiki.postgresql.org/wiki/Audit_trigger
https://wiki.postgresql.org/wiki/Audit_trigger_91plus
 
I've also read about two other approaches to versioning:
1. maintain all versions in one table, with a flag to indicate which is the 
current version
2. have a separate versions table for each real table, and insert into the 
associated version table whenever an update or insert is done.
 
My current implementation is based on the wiki trigger examples, using a single 
table, and a json column to record the row changes (rather than hstore). What I 
like about that, in particular, is I can have a global, chronological view of 
all versioned changes very easily.
 
But there are two types of queries I need to run.
1. Find all changes made by a specific user
2. Find all changes related to a specific record
 
#1 is simple to do. The versioning table has a user_id column of who made the 
change, so I can query on that.
 
#2 is more difficult. I may want to fetch all changes to a group of tables that 
are all related by foreign keys (e.g., find all changes to user record 849, 
along with any changes to their articles, photos, etc.). All of the data is 
in the json column, of course, but it seems like a pain to try to build a
query on the json column that can fetch all those relationships (and if I mess
it up, I probably won't generate any errors, since the json is so free-form).
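
For the direct, single-table part of #2 the 9.3 json operators are already enough; it is the fan-out across related tables that stays painful. A sketch with made-up audit table and column names:

-- changes whose logged row data points at user 849
SELECT *
FROM audit_log
WHERE table_name = 'users'
  AND (row_data->>'user_id')::bigint = 849;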
 
So my question is, do you think using the json approach is wrong for this case? 
Does it seem better to have separate versioning tables associated with each 
real table? Or another approach?
 
Thanks
 
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] memory issues with BYTEAs in JSON and hstore

2014-05-27 Thread Felix Kunde
Hey,
I've developed a database auditing/versioning tool using the JSON data type
(http://github.com/fxku/audit) and I'm doing some tests now.
Unfortunately I'm facing some problems when dealing with tables that store
images as BYTEA. Some of them are around 15 MB in size.

My tool logs changes to JSON and can populate them back to views with
json_populate_recordset and json_agg. When performing this procedure on tables
containing BYTEAs I receive a 54000 error (Cannot enlarge string buffer). Is
this because of json_agg or because of single rows?

Executing to_json on the whole column that contains the binary data (size above
500 MB) leads to out-of-memory errors. The same goes for hstore. Executing these
functions only on the biggest image was successful but froze my pgAdmin. When
I encoded BYTEA to TEXT before transforming it to JSON or hstore it worked. But
trying json_populate_recordset still runs into memory problems (though EXPLAIN
worked).
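
For reference, the encode-first workaround mentioned above would look like this (table and column names are illustrative):

SELECT row_to_json(t)
FROM (SELECT id, encode(image_data, 'base64') AS image_data FROM images) t;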

Do you think JSONB will solve my problems in the future?

Here is also a comparison in size between the bytea (and versions encoded to
TEXT) and the JSON / hstore output, which I found kind of interesting:

operation | bytea | bytea-'escape' | bytea-'hex' | bytea-'base64'
----------|-------|----------------|-------------|---------------
          | 15 MB | 40 MB          | 31 MB       | 21 MB
to_json() | 57 MB | 57 MB          | 31 MB       | 21 MB
hstore()  | 46 MB | 40 MB          | 31 MB       | 21 MB

Thanks in advance for any hints.
Cheers,
Felix


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] comparing two JSON objects in 9.3

2014-05-12 Thread Felix Kunde
Ahoi



I'm developing an auditing mechanism where table contents are logged as JSON (github.com/fxku/audit).
At first I just logged the whole row every time my triggers were fired.
Now I'm trying to log only the old state of values that have changed.

Therefore I compare the OLD and NEW entries using json_each(json):

SELECT old.key, old.value
FROM json_each(row_to_json(OLD)) old
LEFT OUTER JOIN json_each(row_to_json(NEW)) new ON old.key = new.key
WHERE new.value::text <> old.value::text OR new.key IS NULL;

Now I aggregate the result into arrays of JSON and call this function that I've written:


CREATE OR REPLACE FUNCTION audit.build_json(json_keys ANYARRAY, json_values ANYARRAY) RETURNS JSON AS
$$
DECLARE
 json_string TEXT := '{';
 delimeter TEXT := '';
 json_result JSON;
BEGIN
 FOR i IN array_lower(json_keys, 1)..array_upper(json_keys, 1) LOOP
 json_string := json_string || delimeter || json_keys[i] || ':' || json_values[i];
 delimeter := ',';
 END LOOP;

 json_string := json_string || '}';

 EXECUTE format('SELECT %L::json', json_string) INTO json_result;
 RETURN json_result;
END
$$
LANGUAGE plpgsql;



In the end the call looks like this:

SELECT audit.build_json(array_agg(to_json(old.key)), array_agg(old.value))
FROM json_each(row_to_json(OLD)) old
LEFT OUTER JOIN json_each(row_to_json(NEW)) new ON old.key = new.key
WHERE new.value::text <> old.value::text OR new.key IS NULL;




Everything works as expected, but it feels kinda ugly to me.
Any PG JSON experts around to tell me a better solution?
Did I fall into some YouShouldNots?



Regards!

Felix Kunde



[GENERAL] How to convert a double value to a numeric datum type in pgsql?

2014-02-23 Thread Felix . 徐
Hi all,
I'm reading the source code of pgsql and I want to know how the
typing system works in pgsql.
A few functions can help us do the type conversion, for example:

Int32GetDatum -- convert an integer to a datum
Float8GetDatum -- convert double to a datum
cstring_to_text -- convert a string to a text
...

but there are a lot of types in pgsql. How do I choose the corresponding
function? Or is there a more automatic way to do the type conversion (raw
chars to a datum, supposing I have the corresponding Form_pg_type instance)?

I ask this question because I don't know how to convert a double value to a
numeric datum (the field's type is numeric(10,2)); pg_type shows that
numeric's typlen is -1, i.e. variable length, so Float8GetDatum is not
applicable.


Re: [GENERAL] Temporary table already exists

2014-01-31 Thread Felix Kunde

I had a similar problem once. The pool is reusing connections and the temporary tables are still there.

Now I always create new temporary tables with a unique name like this:



tmpTableId = "TMP" + Math.abs(generateUUID().hashCode());
if (tmpTableId.length() > 15)
 tmpTableId = tmpTableId.substring(tmpTableId.length() - 15, tmpTableId.length());

conn.setAutoCommit(true);
tableStmt = conn.createStatement();

try {
 // create global temporary tables
 tableStmt.executeUpdate("create temporary table TABLE_NAME_" + tmpTableId + " ( ... ) on commit preserve rows");



etc.



Then you have to add the tmpTableId to every statement in your code but it should work fine.



Sent: Friday, 31 January 2014 at 12:04
From: mephysto mephystoonh...@gmail.com
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Temporary table already exists

Hi Albe, this is code of my stored function:
CREATE OR REPLACE FUNCTION :FUNCTION_SCHEMA.get_deck_types
	(
		p_id_deck BIGINT
	)
	RETURNS BIGINT[] AS
	
	DECLARE
		l_id_user BIGINT;
		l_cards_number INTEGER;
		l_deck_type BIGINT;
		l_result BIGINT[];
	BEGIN
		SELECT INTO STRICT l_id_user id_user
		FROM ccg_schema.decks_per_user
		WHERE id = p_id_deck;
		
		CREATE LOCAL TEMPORARY TABLE deck_types
		ON COMMIT DROP
		AS
		SELECT stored_functions_v0.get_card_deck_types(t1.id_master_card) AS deck_type_ids
		FROM ccg_schema.deck_composition T0
		,ccg_schema.cards_per_user T1
		WHERE id_deck = p_id_deck
		  AND t1.id_owner = l_id_user
		  AND t0.id_card = t1.id_card;
		
		SELECT INTO l_cards_number COUNT(*)
		FROM deck_types;
		
		FOR l_deck_type IN SELECT DISTINCT unnest(deck_type_ids) FROM deck_types LOOP
			IF (l_cards_number = (SELECT COUNT(*) FROM (SELECT unnest(deck_type_ids) AS id FROM deck_types) T0 WHERE id = l_deck_type)) THEN
l_result := array_append(l_result, l_deck_type);
			END IF;
		END LOOP;
		
		RETURN l_result;
	END;
	
	LANGUAGE PLPGSQL VOLATILE;
ConnectionPool reuses connections, of course, but as you can see from my code,
the temporary table deck_types is already defined with the ON COMMIT DROP
clause, so I think that my work is not in a transaction. Am I right? If so, how
can I put my code in a transaction?

Many thanks.

Mephysto
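
One way to sidestep the "already exists" error when the function runs twice inside one transaction is a guard before the CREATE; a sketch (DROP TABLE IF EXISTS has been available since 8.2):

DROP TABLE IF EXISTS deck_types;
CREATE LOCAL TEMPORARY TABLE deck_types
ON COMMIT DROP
AS
SELECT ... ;  -- the original SELECT from the function body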







[GENERAL] What is the correct way to get the content of a varchar field in pgsql's source code

2014-01-20 Thread Felix . 徐
Hi all,
I've learnt from the source code to open a table and scan it, like this:

Relation qma = try_relation_open(qmappersta, AccessShareLock);
  if(qma!=NULL){
HeapScanDesc scan= heap_beginscan(qma,SnapshotNow,0,NULL);
  HeapTuple tup;
TupleDesc tupDesc= RelationGetDescr(qma);
Datum   *values;
  bool *nulls;
int value1;
char value2;
while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL){
heap_deform_tuple(tup,tupDesc,values, nulls);
/* int values can be retrieved easily */
value1=values[0];
/* how to retrieve the content of a varchar field here? */
/* What I've tried: */
value2=VARDATA_ANY(values[2]);
/* But value2 seems to be incorrect:
   if the original data is 'abcd', then I will get 'abcd2' here (in
most cases, a '2' is appended to the data...). I'm wondering how to get
the data properly...
   I've also tried the VARDATA macro and encountered the same
problem.
*/
}
  }

Thanks!


[GENERAL] Breakpoints are not triggered in analyze.c (debugging Postgresql in Visual studio)

2014-01-16 Thread Felix . 徐
Hi all,

I've set up a developing environment on my windows using Visual Studio
2012, everything works fine, except that the breakpoints set in analyze.c
are not triggered in debug mode (breakpoints in main.c and some
initialization code worked well), and I'm sure that line has been executed
since my own messages have been printed on the console. Does anyone have
any experience debugging postgresql in Visual Studio?

Thanks!


[GENERAL] How are pg_operator and pg_type related with each other?

2014-01-15 Thread Felix . 徐
Hi all,
I'm studying the pg_statistic table and I find that the column staop is related to
pg_operator, and different data types relate to different staop values, but I
don't know where pgsql stores the mapping between pg_type and pg_operator.
Does anyone have any idea about it? Thanks!


Re: [GENERAL] How are pg_operator and pg_type related with each other?

2014-01-15 Thread Felix . 徐
Hi Amit,
I understand. I've read the source code of analyze.c and implemented a Java
version.
Stakind 1 (most common values) indicates the = operator and stakind 2 (histogram)
indicates the < operator by default.
I'm wondering where I can find the corresponding operator OID of eq/lt for a
specific data type.
For example,
the = operator for the type varchar is texteq and the < operator for
varchar is text_lt;
the = operator for the type int4 is int4eq and the < operator for int4 is
int4lt, etc.

And another question: how do I update or insert a column with a type of
anyarray? Since I want to mock the statistic data of tables, and the type of
stavalues in pg_statistic is anyarray, is there any way to manually modify
that column, by some kind of function or hook?
If I run a query like:

UPDATE pg_statistic
   SET stavalues2 = array[1,2,8]
 WHERE ...

then I get error 42804, indicating that the expected type is anyarray
but text[] is found in the query.


Thanks very much!



2014/1/15 Amit Langote amitlangot...@gmail.com

 On Wed, Jan 15, 2014 at 11:08 PM, Felix.徐 ygnhz...@gmail.com wrote:
  Hi all,
  I'm studying pg_statistic table and I find that column staop is related
 to
  pg_operator, and different data types relate to different staop, but I
 don't
  know where pgsql stores the mapping between pg_type and pg_operator, does
  anyone have any idea about it? thanks!

 Rather, different kinds of statistics are related to different
 operators. So, staop2 would refer to an operator suitable/applicable
 for the statistics of kind stakind2.

 For example, stakind2 for some attribute could be the value 3, which
 refers to the statistic kind histogram. In this case, staop2 for the
 same attribute could refer to the operator, say, <, because this
 particular operator could benefit from a histogram distribution of
 values. (Of course, < may be overloaded for different types; but the
 point to take from this is that any < uses the statistic called
 histogram.)

 --
 Amit Langote



[GENERAL] Audit database to recreate former states using JSON functions

2014-01-14 Thread Felix Kunde
Hello



last December I've developed a few PL/pgSQL functions to log
all changes in my database and recreate former database states.
I used row_to_json to log and json_populate_recordset to reproduce
my tables. I think it's a cool feature and I'd like to share and discuss it
with you. I would be very happy to receive some feedback. I'm also
interested in other versioning approaches, if there are any (?).

You can find the repository here: https://github.com/FxKu/audit
I've written all the relevant stuff into the README.

At last, I really like to thank Hans-Jürgen Schönig (http://www.cybertec.at/)
for giving me the advice to use JSON functions. This made the whole
development much easier.

Have a great day.

Felix







Re: [GENERAL] How to specify/mock the statistic data of tables in PostgreSQL

2014-01-12 Thread Felix . 徐
I see, thanks.

I'm looking into the source code of the statistics part now, and I'm a little
confused about the column staop presented in table pg_statistic;
in pg_statistic.h, the comment says:

/*
 * To allow keeping statistics on different kinds of datatypes,
 * we do not hard-wire any particular meaning for the remaining
 * statistical fields. Instead, we provide several slots in which
 * statistical data can be placed. Each slot includes:
 * kind integer code identifying kind of data (see below)
 * op OID of associated operator, if needed
 * numbers float4 array (for statistical values)
 * values anyarray (for representations of data values)
 * The ID and operator fields are never NULL; they are zeroes in an
 * unused slot. The numbers and values fields are NULL in an unused
 * slot, and might also be NULL in a used slot if the slot kind has
 * no need for one or the other.
 */
And:
// line 194: In a "most common values" slot, staop is the OID of the =
// operator used to decide whether values are the same or not.
// line 206: A histogram slot describes the distribution of scalar data;
// staop is the OID of the < operator that describes the sort ordering.


I don't understand the function of staop here. How is it used in the optimizer?
Is there any example? Thanks!



2014/1/10 Amit Langote amitlangot...@gmail.com

 On Fri, Jan 10, 2014 at 11:19 PM, Atri Sharma atri.j...@gmail.com wrote:
 
 
  Sent from my iPad
 
  On 10-Jan-2014, at 19:42, ygnhzeus ygnhz...@gmail.com wrote:
 
  Thanks for your reply.
  So correlation is not related to the calculation of selectivity right?
 If I
  force PostgreSQL not to optimize the join order (by setting
  join_collapse_limit and from_collapse_limit to 1), is there any other
  factor that may affect the structure of the execution plan regardless of the
  data access method.
 
  2014-01-10
  
  ygnhzeus
  
  From: Amit Langote amitlangot...@gmail.com
  Sent: 2014-01-10 22:00
  Subject: Re: [GENERAL] How to specify/mock the statistic data of tables in
  PostgreSQL
  To: ygnhzeus ygnhz...@gmail.com
  Cc: pgsql-general pgsql-general@postgresql.org
 
 
 
  AFAIK, correlation is involved in the calculation of the costs that are
  used for deciding the type of access. If the correlation is low, an index
  scan can lead to quite some random reads, hence leading to higher costs.
 

 Ah, I forgot to mention this point about how planner uses correlation
 for access method selection.

 And selectivity is a function of statistical distribution of column
 values described in pg_statistic by histograms, most common values
 (with their occurrence frequencies), number of distinct values, etc.
 It has nothing to do with correlation.

 --
 Amit Langote



[GENERAL] How can I detect if a schema exists?

2013-01-09 Thread felix
I was reviving an old test program which hasn't been used since 9.1 and found 
that SET search_path = xyzzy no longer fails if the schema xyzzy doesn't 
exist.

Is there an approved or recommended way to tell if a schema exists?  I can 
think of lots of ways, but none as easy as the 9.1 search_path.  I suppose I 
shouldn't rely on undocumented features ...

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How can I detect if a schema exists?

2013-01-09 Thread felix
On Thu, Jan 10, 2013 at 09:11:59AM +0900, Ian Lawrence Barwick wrote:

 How about:
 
   SELECT TRUE FROM information_schema.schemata WHERE schema_name = 'xyzzy'
 
 ? (Though I notice this table only seems to show the schemas owned by the
 user if the user is not a superuser).

I'll have to play with the users and permissions, but that sounds like
it could be a feature; the code wants to know if the user can use the
schema.
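
Both angles can be covered with a plain query; a sketch:

-- pure existence test against the system catalog
SELECT EXISTS (SELECT 1 FROM pg_catalog.pg_namespace WHERE nspname = 'xyzzy');

-- "can the current user actually use it"; note this errors out if the
-- schema does not exist, so test existence first
SELECT has_schema_privilege(current_user, 'xyzzy', 'USAGE');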

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Feature discussion: Should syntax errors abort a transaction?

2012-06-20 Thread felix
On Tue, Jun 19, 2012 at 11:25:24AM -0600, Scott Marlowe wrote:
 On Tue, Jun 19, 2012 at 8:50 AM, Edson Richter edsonrich...@hotmail.com 
 wrote:
  There is also the case of dynamically generated sql statements based on 
  user selection... being syntax or not, I would never want half job done. 
  Thia is the purpose of transactions: or all or nothing...
 
 This this this, and again, this.  Imagine:
 
 begin;
 insert into tableb selcet * from tableb;
 truncate tableb;
 commit;
 
 What should happen when we get to the error on the second line?  Keep
 going?  Boom, data gone because of a syntax error.

I've been lurking, and maybe I should go back to that :-) but I think you 
misunderstand. The idea is not to ignore or second-guess typos, but to report
them without affecting the transaction, and only do this in interactive 
sessions.

Personally, I like the idea of BEGIN INTERACTIVE, but note I do not offer to do 
the work.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Feature discussion: Should syntax errors abort a transaction?

2012-06-20 Thread felix
On Wed, Jun 20, 2012 at 06:36:09AM -0700, fe...@crowfix.com wrote:
 On Tue, Jun 19, 2012 at 11:25:24AM -0600, Scott Marlowe wrote:
  On Tue, Jun 19, 2012 at 8:50 AM, Edson Richter edsonrich...@hotmail.com 
  wrote:
   There is also the case of dynamically generated sql statements based on 
   user selection... being syntax or not, I would never want half job done. 
   Thia is the purpose of transactions: or all or nothing...
  
  This this this, and again, this.  Imagine:
  
  begin;
  insert into tableb selcet * from tableb;
  truncate tableb;
  commit;
  
  What should happen when we get to the error on the second line?  Keep
  going?  Boom, data gone because of a syntax error.
 
 I've been lurking, and maybe I should go back to that :-) but I think you 
 misunderstand. The idea is not to ignore or second-guess typos, but to 
 report them without affecting the transaction, and only do this in 
 interactive sessions.
 
 Personally, I like the idea of BEGIN INTERACTIVE, but note I do not offer to 
 do the work.

Looks like I should go back to lurking, and do better at it :-(

The discussion began about differentiating typos and other errors, which is
clearly not easy or obvious, and something that has always frustrated me when I
find programs which try to do so. Then I saw the idea of BEGIN INTERACTIVE and
lost sight of the discussion.

My apologies for stepping in so badly.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Slow COMMIT statements

2011-12-07 Thread Felix Feinhals
Hello List,
 
2011-12-07 11:52:52 CET 8075 xxx [local]LOG:  duration: 1168.953 ms
statement: COMMIT
2011-12-07 11:52:52 CET 7327 xxx [local]LOG:  duration: 1116.119 ms
statement: COMMIT
2011-12-07 11:52:52 CET 31727 xxx  [local]LOG:  duration: 1131.044 ms
statement: COMMIT
2011-12-07 11:52:52 CET 25077 xxx [local]LOG:  duration: 1064.599 ms
statement: COMMIT
 
these messages show up in the postgresql logs at irregular intervals. We
get no slow queries before or after these statements. Checkpoint logging
is enabled as well, but checkpoints don't show up around these COMMITs;
well, they do, but minutes later ;).
I would like to debug these slow COMMITs but don't know what is causing
them.
The server itself is idling most of the time, with a load avg around
1-2 (with 64 cores) and iowait around 0.5-1%.
 
Notable changes from default config:
 
log_min_duration_statement = 500
log_checkpoints = on
checkpoint_segments = 40

Regards,
 
Felix


[GENERAL] Gentoo, 8,2 --- 8.4, and /var/run/postgresql in mode 770

2010-01-09 Thread felix
I just upgraded my home gentoo system's postgresql from 8.2.14 to
8.4.2.  I use it mostly for fooling around and keeping smatterings of
personal data, so it was simple laziness which kept me from upgrading
sooner, triggered by the gentoo switch back in 8.2.mumble in how they
manage postgresql.

Everything went smoothly except the permissions of the directory
/var/run/postgresql with the domain socket .s.PGSQL.5432.  This dir
had permissions of 770, owned by postgres.postgres, so no mere mortals
could access it.  I have changed this to 775 and can now access it.

Didn't 8.2 put these in /tmp?  Maybe this was a gentoo thing.  What
should the permissions be for this?  Or does gentoo do their own thing
and there is a different standard way of handling this?

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Gentoo, 8,2 --- 8.4, and /var/run/postgresql in mode 770

2010-01-09 Thread felix
On Sat, Jan 09, 2010 at 11:18:19PM +0100, Leif Biberg Kristensen wrote:

 In Gentoo, you must add the database owner (probably your own username) to 
 the 
 group Postgres. This was changed with 8.3, and if you had read the message 
 from emerge, you should have noticed. And, yes, I fumbled a lot with this 
 myself.

Ahh ... I skipped 8.3 and didn't see that message.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database schema dumper

2009-06-29 Thread felix
I'd like to dump a database schema to a file, probably XML but
anything reasonable is good enough.  By schema, I don't mean the
narrow postgres keyword, but rather the table names, columns,
foreign keys, triggers, constraints, etc.

I'd really like something that could work for other databases too,
including O-, M, etc.  But that might be asking too much.

A quick google for variations on "dump database schema" didn't find
much.

Whether it be a CPAN module, Java class, etc, or a standalone program,
none of that matters much.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database schema dumper

2009-06-29 Thread felix
On Mon, Jun 29, 2009 at 04:32:46PM -0600, Scott Marlowe wrote:

 Have you tried pg_dump -s yet?

Well... I know I said the format is immaterial, and I know I could
write something to convert it into something more useful, but it is on
the low end of what I was looking for, and is very much PostgreSQL
only, with no chance of converting it for use with other databases.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database schema dumper

2009-06-29 Thread felix
On Tue, Jun 30, 2009 at 12:21:22AM +0200, Thomas Kellerer wrote:

 Take a look at my SQL Workbench/J, especially the WbReport command:
 
 http://www.sql-workbench.net/index.html
 
 http://www.sql-workbench.net/manual/wb-commands.html#command-schema-report

That just may do the trick.  I'll explore it a bit tonight.  Looks
like a lot of work has gone into it.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / fe...@crowfix.com
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Alias name from subquery

2008-09-30 Thread Felix Homann

Hi,

I would like to set an alias name for a column from a subquery, i.e. 
something like this:


SELECT entry AS (SELECT name from colnames WHERE id=1) from entries ;

Obviously it doesn't work _this_ way, but is there _any_ way to do it?

Kind regards,

Felix

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Alias name from subquery

2008-09-30 Thread Felix Homann

Hi Jeffrey,

Thanks for your quick response!


Hoover, Jeffrey wrote:


select (SELECT name from colnames WHERE id=1) as entry from entries;



I think I should have been a bit clearer about what I need:

I've got two tables, colnames and entries:

test=# SELECT * from colnames;
 id | name
+--
  1 | col1
(1 row)

test=# SELECT entry from entries;
 entry

 first
 second
 third
(3 rows)

I would like to get the column name "entry" replaced by an alias "col1",
just like this:


test=# SELECT entry as col1 from entries;
  col1

 first
 second
 third
(3 rows)

_But_ I don't want to give the alias explicitly; instead it should be
taken from a second table 'colnames', i.e. something like the line I
sent in my initial mail. Any idea?


Thanks again,

Felix

BTW, here's what I get from your command:

test=# select (SELECT name from colnames WHERE id=1) as entry from entries;
 entry
---
 col1
 col1
 col1
(3 rows)




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Alias name from subquery

2008-09-30 Thread Felix Homann

Raymond O'Donnell wrote:

select entry from (select name from colnames where id = 1) as
entry_with_different_name;

...maybe?


Thanks Ray!

No, "entry_with_different_name" should be the result of SELECT name
FROM colnames WHERE id=1.



Kind regards,

Felix

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Alias name from subquery

2008-09-30 Thread Felix Homann

Hi Sam,

Sam Mason wrote:

I think you may be trying to solve the wrong problem, what are you
really trying to do?


Here you go. I have some tables created like this:

CREATE TABLE player(
id INTEGER PRIMARY KEY,
name   varchar(20)
);

CREATE TABLE transfer(
id SERIAL PRIMARY KEY,
fromID INTEGER REFERENCES player(id),
toID   INTEGER REFERENCES player(id),
amount numeric
);

Now, let's fill them with some data:

INSERT INTO player VALUES ('1', 'Peter'), ('2','David'), ('3','Fritz');
INSERT INTO transfer(fromID, toID, amount) VALUES ('1','2','3'), ('1', 
'3', '1'), ('2','1','60');


I would now like to have something like a view that transforms the table 
transfer from this:


test=# SELECT * from transfer;
 id | fromid | toid | amount
++--+
  1 |  1 |2 |  3
  2 |  1 |3 |  1
  3 |  2 |1 | 60


into this:

id | Peter | David | Fritz | ...even more Names from player table
---+---+---+---+-
 1 |-3 | 3 | 0 | 0
 2 |-1 | 0 | 1 | 0
 3 |60 |   -60 | 0 | 0


In other words, I would like to have a named column for every Name in 
the player table. I _can_ create such a view manually if I know each 
player.name beforehand, but I don't know how to automate it for any 
given number of players. (Think of something like a sparse interaction 
matrix representation.)


Maybe it's the wrong problem I'm trying to solve, but at least I would
like to know whether it's possible or not.


Kind regards,

Felix

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Alias name from subquery

2008-09-30 Thread Felix Homann
Thanks to Scott and Taras for pointing me to the crosstab functions. I 
only had a quick look but they seem very helpful!
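
For the archives, a crosstab sketch against the player/transfer tables from earlier in the thread (tablefunc must be installed, on current versions via CREATE EXTENSION tablefunc). Note the output column list still has to be spelled out by hand, which is exactly the dynamic-columns problem:

SELECT *
FROM crosstab(
    'SELECT t.id, p.name,
            CASE WHEN p.id = t.fromid THEN -t.amount ELSE t.amount END
       FROM transfer t
       JOIN player p ON p.id IN (t.fromid, t.toid)
      ORDER BY 1',
    'SELECT name FROM player ORDER BY id'
) AS ct(id integer, "Peter" numeric, "David" numeric, "Fritz" numeric);
-- missing combinations come back as NULL rather than 0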


Kind regards,

Felix

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psqlodbc on Vista Ultimate 64

2008-08-11 Thread Lucas Felix
thank you!!!

=]

2008/8/8 Hiroshi Saito [EMAIL PROTECTED]

 Hi.

 Is this helpful?
 http://winpg.jp/~saito/psqlODBC/psqlODBC64/
 as for AMD64.
 http://www.geocities.jp/inocchichichi/psqlodbc/index.html

 Regards,
 Hiroshi Saito

 - Original Message - From: Lucas Felix
 To: pgsql-general@postgresql.org
 Sent: Saturday, August 09, 2008 12:00 AM
 Subject: [GENERAL] psqlodbc on Vista Ultimate 64



 Olá, estou com um problema, o Windows Vista não reconhece o odbc do
 PostgreSQL, alguma dica?

 Hello, I have a problem, Windows Vista does not recognize the odbc of
 PostgreSQL, any hint?

 --
 Lucas Felix de Sousa
 Técnico em Informática
 [EMAIL PROTECTED]

"If I have seen further than other men, it is because I stood on the shoulders
of giants."




-- 
Lucas Felix de Sousa
Técnico em Informática
[EMAIL PROTECTED]

"If I have seen further than other men, it is because I stood on the shoulders
of giants."


[GENERAL] psqlodbc on Vista Ultimate 64

2008-08-08 Thread Lucas Felix
Olá, estou com um problema, o Windows Vista não reconhece o odbc do
PostgreSQL, alguma dica?

Hello, I have a problem, Windows Vista does not recognize the odbc of
PostgreSQL, any hint?

-- 
Lucas Felix de Sousa
Técnico em Informática
[EMAIL PROTECTED]

"If I have seen further than other men, it is because I stood on the shoulders
of giants."


[GENERAL] query large amount of data in c++ using libpq

2007-08-18 Thread Felix
Hi all,
I am using PQexecParams() to SELECT about 3 million records in C++, and it
takes several minutes to complete, with used memory dramatically
increased (about 200 MB).
It seems that when I use PQexecParams(), I can't use the query result before
all the data has been sent to the client.
Is there something like a server-side cursor in libpq?
Or is there any other way to do this in C++?
Some articles say that queries for large amounts of data are automatically
handled well by pg, but I can't see how.

thanks
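
The usual answer here is a server-side cursor driven from SQL; each of these statements can be sent with a plain PQexec (names are illustrative):

BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM big_cur;   -- repeat until a FETCH returns zero rows
CLOSE big_cur;
COMMIT;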




Re: [GENERAL] Wider than float8

2007-06-08 Thread felix
On Fri, Jun 08, 2007 at 03:00:35PM -0400, Bruce Momjian wrote:
 No.  Frankly I didn't know 12-byte floats were supported in CPUs until
 you posted this.  You could write your own data type to use it, of
 course.

I didn't either, and have no use for them, but curiosity compels me
to wonder how hard this would be. I don't have the sources right at
hand. Is all the float8 code in one source file, and would it be a pretty
simple matter to add a float12 type with a search-and-replace and not much
more?

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] How access table by tableoid

2007-05-13 Thread Felix Kater
Hi,

can I use a given tableoid (instead of the table name) to select
columns from that table somehow?

SELECT * FROM ??tableoid??

My first approach was to lauch a subquery of the information_schema
like this:

SELECT * FROM (SELECT relname FROM pg_class WHERE oid=999) AS
tablename

However, it turned out that the subquery returns the referred *column*,
which is the 'relname' column, but not the table's name. (In other
words: the final result is a single 'relname' column instead of all
columns from the table whose name is in the relname column.) This is not
what I intended.

So I worked around that by performing two queries: the first to retrieve
the table's name from pg_class via its OID, the second to select the
wanted columns from that table using the table's name as usual.

Can I do it in one go using the table's OID?
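
A sketch of how the two steps can be collapsed (999 stands in for the real OID; the DO block needs 9.0+, on older versions wrap it in a function):

SELECT 999::regclass;   -- OID to table name in one step

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN EXECUTE 'SELECT * FROM ' || 999::regclass::text LOOP
        RAISE NOTICE '%', r;
    END LOOP;
END
$$;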

Thank You
Felix

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] pg_contraint: 'action code' ?

2007-05-09 Thread Felix Kater
On Tue, 08 May 2007 10:03:24 -0400
Tom Lane [EMAIL PROTECTED] wrote:

  * the FKCONSTR_MATCH_xxx constants defined in parsenodes.h.

True! ;-)
Thank you so much.

Felix

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-09 Thread Felix Kater
On Tue, 8 May 2007 15:54:08 +0200
Martijn van Oosterhout [EMAIL PROTECTED] wrote:

 A unique index is not a substitute for a unique constraint, they're
 exactly the same thing. If you drop your constraint and create a
 unique index, you're back where you started. You neither added nor
 removed anything.

Yes. For this reason I didn't have to implement *both* 'unique
constraints' *and* 'unique indices' in my pg interface.


 On a certain level foreign keys are just triggers, specially coded to
 do the work. Yes, you could write your own triggers to do exactly the
 same thing, but why bother, when someone has written them for you and
 made nice syntax to use them?

My question simply was whether I could save coding time... like with 'unique
constraints' and 'indexes', see above. However, from what I have learned
now, 'foreign keys' can *not* be substituted by indexes, so I have to
implement them.

Thanks again.

Felix

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Felix Kater
Hi,

I like to keep my pg interface small: Can I replace foreign keys by
using indexes somehow? (This is at least possible for primary key
columns which can be replaced by suitable indexes.)

Thank You
Felix

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Felix Kater
On Tue, 08 May 2007 12:48:30 +0100
Raymond O'Donnell [EMAIL PROTECTED] wrote:

 You can do that, but you'll lose the enforcement of referential 
 integrity, which is what foreign keys give you.

If I get you right:

There is *no complete* substitute for foreign keys by using *indexes*,
since I'd lose the referential integrity (whereas for unique constraints
there *is* a full replacement using indexes)?

Felix

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] are foreign keys realized as indexes?

2007-05-08 Thread Felix Kater
On Tue, 8 May 2007 14:19:12 +0200
Peter Eisentraut [EMAIL PROTECTED] wrote:

  I like to keep my pg interface small: Can I replace foreign keys by
  using indexes somehow?
 
 Not while preserving the semantics.

I am not bound to indexes, however, and wonder if foreign keys themselves
are non-atomic functionality. I mean: if foreign keys are based on some
other lower-level functionality like indexes or anything else which I
could use as a substitute--in whatever way. Of course, I want to
gain the same (referential integrity etc.).

If foreign keys are, however, something unique which can't be replaced
by any other pg function (I am of course not taking into account things
like multiple queries bound together by transactions...) then I have to
go through it and implement it into my pg interface (looking at the
information_schema: this seems to be quite a bunch of work...).

Thank You
Felix

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] pg_constraint: 'action code' ?

2007-05-08 Thread Felix Kater
Hi,

where are the 'action code' values (type char) of foreign keys defined for
ON DELETE resp. ON UPDATE, i.e. for the actions NO ACTION, RESTRICT,
CASCADE, SET NULL and SET DEFAULT in pg_constraint?

In the manual (8.2) they are mentioned but not explained.
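
For later readers, a quick sketch of where those codes live; the letter
values below are the ones defined by the FKCONSTR_ACTION_* constants in
the sources:

    -- confupdtype / confdeltype hold the ON UPDATE / ON DELETE action codes:
    --   a = NO ACTION, r = RESTRICT, c = CASCADE, n = SET NULL, d = SET DEFAULT
    SELECT conname, confupdtype, confdeltype
    FROM pg_constraint
    WHERE contype = 'f';    -- 'f' marks foreign key constraints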

Thank You
Felix

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] binary representation of date and numeric

2007-04-27 Thread Felix Kater
Hi,

what is the binary representation of 'date' and 'numeric' data?

I am using PQexecParam (C binding libpq-8.0.12) with binary export
switched on. From the three examples in C code the types 'int4', 'bytea'
and strings are covered, however nothing about 'date', 'numeric'.

If there is no documentation and I need to examine the sources, could
anyone point me to the needed files or sections?

Thank You

Felix

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PQerrorMessage: suppress trailing new line?

2007-04-24 Thread Felix Kater
On Tue, 24 Apr 2007 11:14:43 +1200
Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:

 On 4/24/07, Felix Kater [EMAIL PROTECTED] wrote:
  Hi,
 
  the messages returned by PQerrorMessage() contain a trailing new
  line. So they don't integrate nicely into strings formatted by
  printf.
 
  Is there a way to suppress the trailing new line (except for
  alloc/copy to a new string)?
 The alloc kind of suggests that you're using C ... is that
 correct?  Either way - please be a bit more specific as to what you're
 trying to achieve by which means.

Sorry (was deep in thoughts while coding...). A bit more detailed:

I am using libpq, the C language pg library. There is a useful function
to return the last error message as human readable text: PQerrorMessage.

While the returned text message is static memory, which is good (you
simply print it and don't have to free the memory), there is a trailing
newline character at the end of each text message, like this:
message text<newline><0-byte terminator>

I find this newline quite uncommon--it is easy to add manually if you
really want it, but it is not so easy to get rid of.

Getting rid of it means copying the whole string without the newline and
passing it to the user, who then has to free the copied memory after
printing it...

I wanted to know if there is a switch or similar somewhere to get the
messages without the newline character.

Thank You
Felix

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] PQerrorMessage: suppress trailing new line?

2007-04-23 Thread Felix Kater
Hi,

the messages returned by PQerrorMessage() contain a trailing newline.
So they don't integrate nicely into strings formatted by printf.

Is there a way to suppress the trailing new line (except for
alloc/copy to a new string)?

Thank You
Felix

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] unique ID across all columns

2007-04-17 Thread Felix Kater
Hi,

I need a unique ID across all columns I create. Is it o.k. to
achieve this by combining pg_attribute.attnum of a column together with
the OID of the table the column is in?

While the table's OID should be unique according to the manual, I have
found no hints about the uniqueness of attnum -- I removed and added
some columns and it seems that already used values are not used again.
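
A sketch of the lookup ('mytable' is a placeholder); dropped columns keep
their attnum, which is why already used values never come back:

    SELECT attrelid, attnum, attname
    FROM pg_attribute
    WHERE attrelid = 'mytable'::regclass
      AND attnum > 0          -- skip system columns
      AND NOT attisdropped;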

Felix

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] postgresql for mysql?

2007-04-01 Thread felix
On Sun, Apr 01, 2007 at 10:53:06AM -0700, Steve Atkins wrote:
 
 On Apr 1, 2007, at 10:33 AM, Anton Melser wrote:
 
 What will they think of next!
 http://krow.livejournal.com/502908.html
 I suppose it makes as much sense as the others, except why would you
 want to use mysql if the storage is in postgres?
 
 If you've inherited data in a postgresql database this will allow
 you to migrate it to the industry standard database without the
 inconvenience and downtime of a dump from postgresql and
 a restore into mysql.
 
 I don't think it's a new idea - IIRC, Aprile Pazzo did something
 similar for MySQL 3 and PG 7.something.

What an interesting name!  I don't know much Italian other than what
I've picked up from a few movies, but I think I now know what Pazzo
means ...

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Tabbed data in tab-separated output

2007-01-04 Thread felix
On Wed, Jan 03, 2007 at 10:31:46PM -0600, mike wrote:
 How are you attempting to restore the table after using psql?  Psql
 insert statements? Pgdump? COPY FROM?

Actually, right now I feed the dump file into a program which mangles
it and analyzes it in various ways.  It will eventually be fed into
psql for restoration elsewhere.  The problem isn't restoring it.  It's
not knowing how to tell which tabs are field separators and which are
part of the data.

 On Wed, 2007-01-03 at 17:59 -0800, [EMAIL PROTECTED] wrote:
  pg_dump apparently is smart enough to print embedded tabs as escaped
  chars, but not psql.  Is there a fix for this?  I thought of reverting
  to standard output, without the -t option, and analyzing the first two
  lines to tell exactly how many spaces are assigned to each column, but
  that gives me the shudders.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Tabbed data in tab-separated output

2007-01-03 Thread felix
I have a table which has a few VARCHAR columns whose first character
is a tab.  If I run pg_dump on this table, it outputs data like this:

43158  \t555-12123

where the two embedded white spaces are actually tabs.

If I use psql to execute SQL to dump parts of the table, like this:

psql -qtA -F '\t' -U xyzzy xyzzy -c 'select ...' > /tmp/xxx

I get this:

43158   310-319-1333, x1070 3

where that initial embedded white space represents two tabs.  When I
use psql to restore this data, it thinks the 2nd column is empty and
complains that the third column is the wrong type.

pg_dump apparently is smart enough to print embedded tabs as escaped
chars, but not psql.  Is there a fix for this?  I thought of reverting
to standard output, without the -t option, and analyzing the first two
lines to tell exactly how many spaces are assigned to each column, but
that gives me the shudders.
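
One hedged alternative ('mytable' is a placeholder): COPY escapes embedded
tabs as \t just as pg_dump does, so its output can be reloaded safely:

    COPY mytable TO '/tmp/xxx';    -- server-side file, written by the backend
    \copy mytable TO '/tmp/xxx'    -- client-side equivalent, from within psql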

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] How to split a table?

2006-10-17 Thread Felix Zhang
Hi,

I want to split a table into 2 smaller tables. The 1st one should contain 60% of the records, randomly selected from the source table.
How to do it?
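
A minimal sketch, assuming the source table is called src and has a
primary key id; substitute 60% of your actual row count for the LIMIT:

    CREATE TABLE part1 AS
      SELECT * FROM src ORDER BY random() LIMIT 600;   -- 600 = 60% of 1000 rows

    CREATE TABLE part2 AS
      SELECT * FROM src WHERE id NOT IN (SELECT id FROM part1);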

Regards,
Felix


[GENERAL] Any documentation about porting from Oracle to PostgreSQL

2006-10-17 Thread Felix Zhang
Hi all,

I'm a newbie of PostgreSQL. I'm searching for materials about porting from Oracle to PostgreSQL.
Can anyone share some good documentation with me?

Thanks and regards,
Felix


Re: [NOVICE] [GENERAL] How to split a table?

2006-10-17 Thread Felix Zhang
To do some statistics analysis.
2006/10/17, Andreas Kretschmer [EMAIL PROTECTED]:
Felix Zhang [EMAIL PROTECTED] schrieb:
 Hi, I want to split a table into 2 small tables. The 1st one contains 60%
 of the records, randomly selected from the source table. How to do it?

Why do you want to do this?

Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect. (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly. (unknown)
Kaufbach, Saxony, Germany, Europe. N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Is it possible to have multiple names for a column?

2006-09-14 Thread felix
We have a need to rename some columns, but since we can't update both
the database and the programs instantly, we'd like to temporarily
assign both names to the same column while the updates are in
progress.  Something like this would be super nifty :-)

ALTER TABLE howdy_doody ADD NAME xyzzy TO COLUMN plugh;

I am pretty certain no such SQL command exists.  But is it possible to
do something sneaky to the internal tables so that two names point to
the same column and everything just works?  Everything meaning updates,
inserts, etc, not further ALTER TABLE and so on.
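
One hedged workaround sketch, reusing the made-up names above: rename the
column and expose the old name through a view during the transition.
Reads work right away; writes through the view would still need rewrite
rules or triggers:

    ALTER TABLE howdy_doody RENAME COLUMN plugh TO xyzzy;
    CREATE VIEW howdy_doody_compat AS
      SELECT *, xyzzy AS plugh FROM howdy_doody;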

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Query optimization and indexes

2006-08-18 Thread felix
Suppose I have an index on 5 columns (A, B, C, D, E). 

If my WHERE clause is not in that order, will the optimizer reorder
them as necessary and possible?

WHERE A=1 AND C=3 AND B=2 AND E=5 AND D=4

Obviously it can't reorder them in all cases:

WHERE A=1 AND (C=3 OR B=2) AND (E=5 OR D=4)

If I don't specify columns in the WHERE clause, how much can it use
the index?  I think it is smart enough to use beginning columns:

WHERE A=1 AND B=2

How about skipping leading columns?

WHERE B=2

How about skipping intermediate columns?

WHERE A=1 AND C=3

Or both, which is probably the same?  

WHERE B=2 AND D=4?
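
EXPLAIN answers all of these directly against your own data; a sketch with
placeholder names:

    CREATE INDEX t_abcde_idx ON t (a, b, c, d, e);
    EXPLAIN SELECT * FROM t WHERE a=1 AND c=3 AND b=2 AND e=5 AND d=4;
    EXPLAIN SELECT * FROM t WHERE b=2;             -- leading column skipped
    EXPLAIN SELECT * FROM t WHERE a=1 AND c=3;     -- intermediate column skipped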

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Query runs fast or slow

2006-04-17 Thread felix
On Sun, Apr 16, 2006 at 04:32:25PM -0400, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  EXPLAIN ANALYZE doesn't show the slow timing
  because it requires values, not $n placeholders,
 
 To analyze the plan used for a parameterized query, try
 
   PREPARE foo(...) AS SELECT ... $n ...
 
   EXPLAIN ANALYZE EXECUTE foo(...)
 
 But I already know what you're going to find: the planner's estimates
 for the range query are not going to be very good when it has no idea
 what the range bounds are.  This is a situation where it may be best
 to absorb the hit of re-planning each time instead of using a generic
 parameterized plan.

OK, here is the new explain analyze.  I eliminated cache effects by
dumping the tables and picking random values with an editor.

felix=> PREPARE foo(TEXT, INT, INT) AS SELECT s.data, g.key, g.val, g.sid FROM 
key k, val_int v, sid s, glue_int g WHERE (k.data = $1 AND k.id = g.key) AND 
(v.data >= $2 AND v.data <= $3) AND v.id = g.val AND g.sid = s.id;
PREPARE
felix=> explain analyze execute foo('mthNQFrmVs3Q4bVruCxIAGy', 1973028023, 
1973028223);
 
QUERY PLAN  
   

 Nested Loop  (cost=1380.11..404223.36 rows=499 width=60) (actual 
time=5785.012..77823.688 rows=1 loops=1)
   ->  Hash Join  (cost=1380.11..402713.38 rows=499 width=16) (actual 
time=5766.308..77804.969 rows=1 loops=1)
 Hash Cond: (outer.val = inner.id)
         ->  Nested Loop  (cost=0.00..400829.78 rows=99701 width=16) (actual 
time=115.154..77401.159 rows=10 loops=1)
               ->  Index Scan using key_data_key on key k  (cost=0.00..5.82 
rows=1 width=4) (actual time=0.125..0.132 rows=1 loops=1)
 Index Cond: (data = $1)
               ->  Index Scan using glue_int_key_idx on glue_int g  
(cost=0.00..399577.70 rows=99701 width=16) (actual time=115.011..76570.366 
rows=10 loops=1)
 Index Cond: (outer.id = g.key)
         ->  Hash  (cost=1378.86..1378.86 rows=500 width=4) (actual 
time=11.580..11.580 rows=0 loops=1)
               ->  Index Scan using val_int_data_key on val_int v  
(cost=0.00..1378.86 rows=500 width=4) (actual time=11.556..11.561 rows=1 
loops=1)
                     Index Cond: ((data >= $2) AND (data <= $3))
   ->  Index Scan using sid_pkey on sid s  (cost=0.00..3.01 rows=1 width=52) 
(actual time=18.682..18.687 rows=1 loops=1)
 Index Cond: (outer.sid = s.id)
 Total runtime: 77823.897 ms
(14 rows)

A repeat shows it faster, from 77 seconds to 3.

felix=> explain analyze execute foo('mthNQFrmVs3Q4bVruCxIAGy', 1973028023, 
1973028223);
   
QUERY PLAN  
  
-
 Nested Loop  (cost=1380.11..404223.36 rows=499 width=60) (actual 
time=205.137..2931.899 rows=1 loops=1)
   ->  Hash Join  (cost=1380.11..402713.38 rows=499 width=16) (actual 
time=205.056..2931.803 rows=1 loops=1)
 Hash Cond: (outer.val = inner.id)
         ->  Nested Loop  (cost=0.00..400829.78 rows=99701 width=16) (actual 
time=0.148..2564.255 rows=10 loops=1)
               ->  Index Scan using key_data_key on key k  (cost=0.00..5.82 
rows=1 width=4) (actual time=0.031..0.039 rows=1 loops=1)
 Index Cond: (data = $1)
               ->  Index Scan using glue_int_key_idx on glue_int g  
(cost=0.00..399577.70 rows=99701 width=16) (actual time=0.105..1808.068 
rows=10 loops=1)
 Index Cond: (outer.id = g.key)
         ->  Hash  (cost=1378.86..1378.86 rows=500 width=4) (actual 
time=0.090..0.090 rows=0 loops=1)
               ->  Index Scan using val_int_data_key on val_int v  
(cost=0.00..1378.86 rows=500 width=4) (actual time=0.074..0.080 rows=1 loops=1)
                     Index Cond: ((data >= $2) AND (data <= $3))
   ->  Index Scan using sid_pkey on sid s  (cost=0.00..3.01 rows=1 width=52) 
(actual time=0.061..0.066 rows=1 loops=1)
 Index Cond: (outer.sid = s.id)
 Total runtime: 2932.013 ms
(14 rows)

And running it as a simple query shows it much faster, 72 ms.

felix=> EXPLAIN ANALYZE SELECT s.data, g.key, g.val, g.sid FROM key k, val_int 
v, sid s, glue_int g WHERE (k.data = 'mthNQFrmVs3Q4bVruCxIAGy' AND k.id = 
g.key) AND (v.data >= 1973028023 AND v.data <= 1973028223) AND v.id = g.val AND 
g.sid = s.id;
QUERY 
PLAN

Re: [GENERAL] Asking advice on speeding up a big table

2006-04-16 Thread felix-accts-pgsql
On Sat, Apr 15, 2006 at 10:31:26AM -0400, Francisco Reyes wrote:
 [EMAIL PROTECTED] writes:
 
 Usage is to match data from the key and val tables to fetch the data
 value from the sid table.
 
 What is the relation between key and val tables?
 Will key.id and val.id be equal?

This benchmark explores an idea for a simple berkeley-db-like lookup,
but faster and allowing ranges, and specialized for looking up info in
some other tables I have.  The key table data is table.column, and
1000 is a rough guess on how many unique column names there might be.
The val table is the contents of those columns, and 100K is another
rough guess.  The end result, the sid table, is a generic ID I have,
could be anything, like lat/lon, room-bookshelf-shelf-book, etc.

key.id and val.id have no bearing on each other.

I have made some minor changes and speeded things up to around 15-20
lookups/sec, good enough, but not exciting :-) and in the process,
come across some odd misbehavior.  I have a writeup, almost ready to
post, but I want to make sure I cross my Is and dot my Ts properly.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Query runs fast or slow

2006-04-16 Thread felix-accts-pgsql
I have a benchmark test which runs a query very slowly under certain
circumstances.  I used Ethereal to capture the packet traffic, and
also enabled debug5 logging out of curiosity.  While the slow query
is in progress, there is no log or packet activity, but the cpu is
busy.  These packets are below; look for SLOW PACKETS HERE to skip all
this explanatory drudge.

This WHERE clause is fast under all conditions:

... AND (val.data = $2) AND ...

This WHERE clause is fast as a simple query, but is excruciatingly
slow as prepare / execute / fetch:

... AND (val.data > $2 AND val.data < $3) AND ...

My test program is in Perl and uses DBI/DBD::Pg.  Postgresql version
is 8.0.3 on a dual core dual opteron with 2G of RAM.  DBI is version
1.48.  DBD::Pg is version 1.42.  The OS is rPath Linux 2.6.15.

The test runs each SQL statement three times, first as a simple query
to preload caches, then as prepare / execute / fetch, and lastly as a
simple query again.

$sth = $dbh->prepare(sql_with_placeholders);
$dbh->selectall_arrayref(sql_with_values_substituted);
$sth->execute(@values);
$sth->fetchall_arrayref();
$dbh->selectall_arrayref(sql_with_values_substituted);

I captured packet traffic and tailed the log while these were running.

Everything is fine except one query, which took 75 seconds to run,
when the others took 3 milliseconds.  During this 75 seconds, the
postmaster log showed no activity, but top showed the postmaster quite
busy.

75 seconds!  That's an eternity.  I can't imagine any circumstances
where it makes sense.  EXPLAIN ANALYZE doesn't show the slow timing
because it requires values, not $n placeholders, and it is the prepare
/ execute operation which is so slow.  I will be glad to send the log,
the packet capture file, the test program itself, and anything else
which helps.  Here are the EXPLAIN statements in case it helps.

EXPLAIN for the equality WHERE clause:

felix=> explain analyze SELECT sid.data, glue.key, glue.val, glue.sid FROM 
key, val, sid, glue WHERE (key.data = 
'x6ATArB_k1cgLp1mD5x2nzVVf2DQw4Lw1-Ow5NCzzs5Pupg6K' AND key.id = glue.key) AND 
(val.data = 357354306) AND val.id = glue.val AND glue.sid = sid.id;
   QUERY 
PLAN   


 Nested Loop  (cost=5.82..1119.29 rows=1 width=60) (actual 
time=2.271..36.184 rows=1 loops=1)
   ->  Hash Join  (cost=5.82..1116.27 rows=1 width=16) (actual 
time=2.079..35.976 rows=1 loops=1)
 Hash Cond: (outer.key = inner.id)
         ->  Nested Loop  (cost=0.00..1105.43 rows=1001 width=16) (actual 
time=0.315..31.820 rows=1000 loops=1)
               ->  Index Scan using val_data_key on val  (cost=0.00..6.01 
rows=1 width=4) (actual time=0.119..0.123 rows=1 loops=1)
 Index Cond: (data = 357354306)
               ->  Index Scan using glue_val_idx on glue  
(cost=0.00..702.58 rows=31747 width=16) (actual time=0.181..24.438 rows=1000 
loops=1)
 Index Cond: (outer.id = glue.val)
         ->  Hash  (cost=5.82..5.82 rows=1 width=4) (actual 
time=0.292..0.292 rows=0 loops=1)
               ->  Index Scan using key_data_key on key  (cost=0.00..5.82 
rows=1 width=4) (actual time=0.266..0.271 rows=1 loops=1)
 Index Cond: (data = 
'x6ATArB_k1cgLp1mD5x2nzVVf2DQw4Lw1-Ow5NCzzs5Pupg6K'::text)
   ->  Index Scan using sid_pkey on sid  (cost=0.00..3.01 rows=1 width=52) 
(actual time=0.179..0.183 rows=1 loops=1)
 Index Cond: (outer.sid = sid.id)
 Total runtime: 37.880 ms
(14 rows)

EXPLAIN for the range WHERE clause:

felix=> explain analyze SELECT sid.data, glue.key, glue.val, glue.sid FROM 
key, val, sid, glue WHERE (key.data = 'kOSkZ5iN6sz-KqGo51aTwqZnvCKQRUH2SZ8k' 
AND key.id = glue.key) AND (val.data > 183722006 AND val.data < 183722206) AND 
val.id = glue.val AND glue.sid = sid.id;
  QUERY 
PLAN   

---
 Nested Loop  (cost=5.82..1119.30 rows=1 width=60) (actual 
time=15.016..15.525 rows=1 loops=1)
   ->  Hash Join  (cost=5.82..1116.27 rows=1 width=16) (actual 
time=14.879..15.374 rows=1 loops=1)
 Hash Cond: (outer.key = inner.id)
         ->  Nested Loop  (cost=0.00..1105.43 rows=1001 width=16) (actual 
time=0.211..11.666 rows=1000 loops=1)
               ->  Index Scan using val_data_key on val  (cost=0.00..6.01 
rows=1 width=4) (actual time=0.071..0.076 rows=1 loops=1)
                     Index Cond: ((data > 183722006) AND (data <

Re: [GENERAL] Query runs fast or slow

2006-04-16 Thread felix
On Sun, Apr 16, 2006 at 04:32:25PM -0400, Tom Lane wrote:

 To analyze the plan used for a parameterized query, try
 
   PREPARE foo(...) AS SELECT ... $n ...
 
   EXPLAIN ANALYZE EXECUTE foo(...)
 
 But I already know what you're going to find: the planner's estimates
 for the range query are not going to be very good when it has no idea
 what the range bounds are.  This is a situation where it may be best
 to absorb the hit of re-planning each time instead of using a generic
 parameterized plan.

I will try this Monday, but isn't 75 seconds an awful long time?  It
almost seems like even the worst plan could find records faster than
that, and if it were actually scanning everything sequentially, there
would be a fair amount of variation, say 25 seconds, 50 seconds, 100
seconds.  The most I have seen is a range of, say, 75-77.  That just
seems way too slow.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Asking advice on speeding up a big table

2006-04-11 Thread felix
On Tue, Apr 11, 2006 at 09:52:40AM +0200, hubert depesz lubaczewski wrote:
 On 4/10/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

  What I was hoping for was some general insight from the EXPLAIN
  ANALYZE, that maybe extra or different indices would help, or if there
  is some better method for finding one row from 100 million.  I realize
  I am asking a vague question which probably can't be solved as
  presented.
 
 
 hmm .. perhaps you can try to denormalize the table, and then use
 multicolumn indices?

That's an idea ... I had thought that since my two referenced tables
are small in comparison to the third table, that wouldn't be of any
use, but I will give it a try.  Thanks ...
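
A sketch of that idea against the schema from this thread (glue_flat is a
made-up name): fold the small key/val lookups into one wide table and
index the combination directly:

    CREATE TABLE glue_flat AS
      SELECT k.data AS key_data, v.data AS val_data, s.data AS sid_data
      FROM glue g
      JOIN key k ON k.id = g.key
      JOIN val v ON v.id = g.val
      JOIN sid s ON s.id = g.sid;

    CREATE INDEX glue_flat_idx ON glue_flat (key_data, val_data);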

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Asking advice on speeding up a big table

2006-04-10 Thread felix
I have a simple benchmark which runs too slow on a 100M row table, and
I am not sure what my next step is to make it faster.

It's a simple setup, part of a larger system.  There are three data
tables, each with a BIGINT id and a data column of different types.
There is a fourth table with BIGINT foreign key references to the
other three tables' id columns.

felix=> \d key
Table oddtimes.key
 Column |  Type  | Modifiers 
++---
 id | bigint | not null
 data   | text   | not null
Indexes:
key_pkey PRIMARY KEY, btree (id)
key_data_key UNIQUE, btree (data)

felix=> \d val
 Table oddtimes.val
 Column |  Type   | Modifiers 
+-+---
 id | bigint  | not null
 data   | integer | not null
Indexes:
val_pkey PRIMARY KEY, btree (id)
val_data_key UNIQUE, btree (data)

felix=> \d sid
Table oddtimes.sid
 Column | Type  | Modifiers 
+---+---
 id | bigint| not null
 data   | character(40) | not null
Indexes:
sid_pkey PRIMARY KEY, btree (id)

felix=> \d glue
Table oddtimes.glue
 Column |  Type  | Modifiers 
++---
 key| bigint | 
 val| bigint | 
 sid| bigint | 
Indexes:
glue_key_idx btree (key)
glue_key_val_idx btree (key, val)
glue_val_idx btree (val)
glue_val_key_idx btree (val, key)
Foreign-key constraints:
glue_key FOREIGN KEY (key) REFERENCES key(id)
glue_val FOREIGN KEY (val) REFERENCES val(id)
glue_sid FOREIGN KEY (sid) REFERENCES sid(id)

Usage is to match data from the key and val tables to fetch the data
value from the sid table.  It's sort of a glorified Berkeley db, but
you can do compare ranges, not just exact matches.  If I can make it
fast enough, I may add two more types, date and text.

The key table has 1K rows, val has 100K, and sid and glue have 100M
rows.  They take about 31G space, last time I checked.

 felix=> SELECT relname, relpages FROM pg_class ORDER BY relpages DESC 
LIMIT 20;
  relname | relpages 
 -+--
  sid |  1086957
  glue|   735295
  glue_key_val_idx|   385042
  glue_val_key_idx|   385042
  sid_pkey|   274194
  glue_key_idx|   274194
  glue_val_idx|   274194
  val |  589
  val_pkey|  382
  val_data_key|  283
 

My benchmark times SQL matches like this.  This example was a repeat
and has reasonable speed.  But fresh values take around 3-4 seconds.
This is 8.0.3 on a dual Opteron dual core machine with only 2G RAM (it
is meant for compute intensive work and was idle while I ran these
tests).  The disks are plain vanilla IDE, maybe SATA, but nothing at
all special.

felix=> explain analyze SELECT sid.data, val.data FROM key, val, sid, glue 
WHERE key.data = 'UajzAQjTJPevVJBuuerjU4pcl8eJcyrIxzkC' AND key.id = glue.key 
AND val.data = 1984186373 AND val.id = glue.val AND glue.sid = sid.id;
   QUERY 
PLAN   


 Nested Loop  (cost=5.84..4480.89 rows=1 width=48) (actual 
time=32.157..32.157 rows=0 loops=1)
   ->  Hash Join  (cost=5.84..4477.87 rows=1 width=12) (actual 
time=32.149..32.149 rows=0 loops=1)
 Hash Cond: (outer.key = inner.id)
         ->  Nested Loop  (cost=0.00..4467.01 rows=1001 width=20) (actual 
time=0.205..28.304 rows=1000 loops=1)
               ->  Index Scan using val_data_key on val  (cost=0.00..6.01 
rows=1 width=12) (actual time=0.059..0.066 rows=1 loops=1)
 Index Cond: (data = 1984186373)
               ->  Index Scan using glue_val_idx on glue  
(cost=0.00..4447.15 rows=1108 width=24) (actual time=0.131..20.670 rows=1000 
loops=1)
 Index Cond: (outer.id = glue.val)
         ->  Hash  (cost=5.84..5.84 rows=1 width=8) (actual 
time=0.123..0.123 rows=0 loops=1)
               ->  Index Scan using key_data_key on key  (cost=0.00..5.84 
rows=1 width=8) (actual time=0.100..0.105 rows=1 loops=1)
 Index Cond: (data = 
'UajzAQjTJPevVJBuuerjU4pcl8eJcyrIxzkC'::text)
   ->  Index Scan using sid_pkey on sid  (cost=0.00..3.01 rows=1 width=52) 
(never executed)
 Index Cond: (outer.sid = sid.id)
 Total runtime: 32.293 ms
(14 rows)


I realize these indices and tables take

Re: [GENERAL] Asking advice on speeding up a big table

2006-04-10 Thread felix
On Mon, Apr 10, 2006 at 02:51:30AM -0400, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  I have a simple benchmark which runs too slow on a 100M row table, and
  I am not sure what my next step is to make it faster.
 
 The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
 enough for anyone on that size table.  You need to show us data on the
 problem case ...

It is, but it is only 32 msec because the  query has already run and
cached the useful bits.  And since I have random values, as soon as I
look up some new values, they are cached and no longer new.

What I was hoping for was some general insight from the EXPLAIN
ANALYZE, that maybe extra or different indices would help, or if there
is some better method for finding one row from 100 million.  I realize
I am asking a vague question which probably can't be solved as
presented.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] ALTER TABLE -- how to add ON DELETE CASCADE?

2006-03-07 Thread felix
I have table A with a column AA which references table B's primary
key BB, and I want to alter column AA to delete on cascade.

ALTER TABLE A COLUMN AA ADD CONSTRAINT DELETE ON CASCADE

is what I tried with a zillion variations, all reporting syntax
errors.

\h alter table seems to be missing any way to add a constraint to a
column.

Or maybe this isn't a constraint ... does seem like maybe the wrong
name, but I can't think what else it would be called, and the other
choices shown with \h seem even less likely.

Your assistance is much appreciated.  I would gladly send you some
Oreos or Ginger Nuts :-)

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] ALTER TABLE -- how to add ON DELETE CASCADE?

2006-03-07 Thread felix
On Tue, Mar 07, 2006 at 05:36:37PM -0500, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  ALTER TABLE A COLUMN AA ADD CONSTRAINT DELETE ON CASCADE
 
 You're missing the specification of the foreign key, not to mention
 spelling the CASCADE clause backwards.  Try
 
 ALTER TABLE A ADD FOREIGN KEY(AA) REFERENCES B(BB) ON DELETE CASCADE

Got it right in the Subject: and my many attempts, just not in the
body :-)

The column already had the foreign key, I never thought to add it
again.  I was only thinking of modifying the minimum necessary.
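
For the archives, a sketch of the drop-and-re-add that makes the change;
the constraint name a_aa_fkey is an assumption (check \d a for the real
one):

    ALTER TABLE a DROP CONSTRAINT a_aa_fkey;
    ALTER TABLE a ADD CONSTRAINT a_aa_fkey
      FOREIGN KEY (aa) REFERENCES b(bb) ON DELETE CASCADE;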

Thanks.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] unsubscribe pgsql-general

2005-11-17 Thread felix
On Thu, Nov 17, 2005 at 09:53:26AM -0800, Peter Atkins wrote:
 unsubscribe pgsql-general

O dear ... I haven't posted this in a while :-)

Here's how to unsubscribe:

First, ask your Internet Provider to mail you an Unsubscribing Kit.
Then follow these directions.

The kit will most likely be the standard no-fault type. Depending on
requirements, System A and/or System B can be used. When operating
System A, depress lever and a plastic dalkron unsubscriber will be
dispensed through the slot immediately underneath. When you have
fastened the adhesive lip, attach connection marked by the large X
outlet hose. Twist the silver-coloured ring one inch below the
connection point until you feel it lock.

The kit is now ready for use. The Cin-Eliminator is activated by the
small switch on the lip.  When securing, twist the ring back to its
initial condition, so that the two orange lines meet.  Disconnect.
Place the dalkron unsubscriber in the vacuum receptacle to the rear.
Activate by pressing the blue button.

The controls for System B are located on the opposite side. The red
release switch places the Cin-Eliminator into position; it can be
adjusted manually up or down by pressing the blue manual release
button. The opening is self-adjusting. To secure after use, press the
green button, which simultaneously activates the evaporator and
returns the Cin-Eliminator to its storage position.

You may log off if the green exit light is on over the evaporator.  If
the red light is illuminated, one of the Cin-Eliminator requirements
has not been properly implemented. Press the List Guy call button on
the right of the evaporator. He will secure all facilities from his
control panel.

To use the Auto-Unsub, first undress and place all your clothes in the
clothes rack. Put on the velcro slippers located in the cabinet
immediately below. Enter the shower, taking the entire kit with
you. On the control panel to your upper right upon entering you will
see a Shower seal button. Press to activate. A green light will then
be illuminated immediately below. On the intensity knob, select the
desired setting. Now depress the Auto-Unsub activation lever. Bathe
normally.

The Auto-Unsub will automatically go off after three minutes unless
you activate the Manual off override switch by flipping it up. When
you are ready to leave, press the blue Shower seal release
button. The door will open and you may leave. Please remove the velcro
slippers and place them in their container.

If you prefer the ultrasonic log-off mode, press the indicated blue
button. When the twin panels open, pull forward by rings A  B. The
knob to the left, just below the blue light, has three settings, low,
medium or high. For normal use, the medium setting is suggested.

After these settings have been made, you can activate the device by
switching to the ON position the clearly marked red switch. If
during the unsubscribing operation you wish to change the settings,
place the manual off override switch in the OFF position. You may
now make the change and repeat the cycle. When the green exit light
goes on, you may log off and have lunch. Please close the door behind
you.


-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1

2005-10-24 Thread felix
Having been surprised a few times myself by EXPLAIN showing a
sequential scan instead of using an index, and having seen so many
others surprised by it, I hope I am not asking a similar question.

We recently upgraded our db servers, both old and new running 8.0, and
one casualty was forgetting to add the nightly VACUUM ANALYZE.
Inserts were down to 7-8 seconds apiece, but are now back to normal
under a second since the tables were vacuumed.

However, in the process of investigating this, my boss found something
which we do not understand.  A table with a primary key 'id' takes 200
seconds to SELECT MAX(id), but is as close to instantaneous as you'd
want for SELECT ID ORDER BY ID DESC LIMIT 1.  I understand why
count(*) has to traverse all records, but why does MAX have to?  This
table has about 750,000 rows, rather puny.

I suspect there is either a FAQ which I missed, or no one can answer
without EXPLAIN printouts.  I'm hoping there is some generic answer to
something simple I have overlooked.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1

2005-10-24 Thread felix
Dang, that's a lot of answer! :-) and not what I was hoping for.  Max
and count both have to look up data records to skip values associated
with other transactions.  But count, by definition, has to scan every
single record from one end of the index to the other, so the index is
useless, whereas max will probably scan only a very few records before
finding the first valid one.

I can't see any difference between these two statements:

SELECT MAX(id) FROM table;
SELECT id FROM table ORDER BY id DESC LIMIT 1;

If the planner / optimizer / whatever doesn't optimize them to the
same end result, is there a reason not to?  Is there a case for
putting it on the TODO list?

In case it is any help, here is the EXPLAIN ANALYZE results:

EXPLAIN ANALYZE SELECT id FROM transaction ORDER BY id DESC LIMIT 1;

QUERY PLAN

 Limit  (cost=0.00..1.98 rows=1 width=4) (actual time=22.482..22.485
rows=1 loops=1)
   ->  Index Scan Backward using transaction_pkey on transaction
(cost=0.00..1944638.42 rows=984531 width=4) (actual
time=22.474..22.474
rows=1 loops=1)
 Total runtime: 22.546 ms
(3 rows)



EXPLAIN ANALYZE SELECT MAX(id) FROM transaction;
  QUERY PLAN
---
 Aggregate  (cost=52745.64..52745.64 rows=1 width=4) (actual
time=11500.994..11500.998 rows=1 loops=1)
   ->  Seq Scan on transaction  (cost=0.00..50284.31 rows=984531
width=4) (actual time=57.164..8676.015 rows=738952 loops=1)
 Total runtime: 11501.096 ms

And that's a good one - I've seen it take as long as 20 seconds...



-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Generic Q about max(id) vs ORDER BY ID DESC LIMIT 1 -- SOLVED

2005-10-24 Thread felix
On Mon, Oct 24, 2005 at 07:14:43PM -0400, Alex Turner wrote:
 I believe based on semi-recent posts that MIN and MAX are now treated
 as special cases in 8.1, and are synonymous with select id order by id
 desc limit 1 etc..

Aha!  I looked it up in the release notes, you are right.  I had never
thought they would not be special cased.

Thanks.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Oracle buys Innobase

2005-10-08 Thread felix
On Sat, Oct 08, 2005 at 10:31:30AM -0500, Scott Marlowe wrote:

 What it comes down to is this.  MySQL is dual licensed.  You can use
 the GPL version, or the commercial version.  In order to sell the
 commercially licensed version, MySQL must have the rights to all the
 code in their base.  So, in order for MySQL to sell a commercail
 version of MySQL with innodb support, they have to pay innobase a
 bit to include it, or rip it out.

I don't understand.  If both MySQL and Innodb are GPL licensed,
commercial or not should make no difference, and they can add all the
GPL changes they want to the last Innodb GPL release.

What am I missing?

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Oracle buys Innobase

2005-10-08 Thread felix
On Sat, Oct 08, 2005 at 02:11:54PM -0700, [EMAIL PROTECTED] wrote:
 
 What am I missing?

[ many answers ]

Ahhh ... I did not realize they were selling a commercial version with
a dual license.  I had thought they were selling support contracts.

I confess I find this weird too.  I can't see why someone would want
to distribute their own private label version of MySQL, unless they
were making significant changes, and then I can't see why anyone would
want to buy such a version.  But I have met many people, not just
corporate types, who think $0 = worthless, and $$ not as good as
$$$$, even for the exact same piece of gear.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] unsubscribe

2005-08-25 Thread felix
On Thu, Aug 25, 2005 at 07:30:07AM +0800, Richard Sydney-Smith wrote:
 unsubscribe

Here's how to unsubscribe:

First, ask your Internet Provider to mail you an Unsubscribing Kit.
Then follow these directions.

The kit will most likely be the standard no-fault type. Depending on
requirements, System A and/or System B can be used. When operating
System A, depress lever and a plastic dalkron unsubscriber will be
dispensed through the slot immediately underneath. When you have
fastened the adhesive lip, attach connection marked by the large X
outlet hose. Twist the silver-coloured ring one inch below the
connection point until you feel it lock.

The kit is now ready for use. The Cin-Eliminator is activated by the
small switch on the lip.  When securing, twist the ring back to its
initial condition, so that the two orange lines meet.  Disconnect.
Place the dalkron unsubscriber in the vacuum receptacle to the rear.
Activate by pressing the blue button.

The controls for System B are located on the opposite side. The red
release switch places the Cin-Eliminator into position; it can be
adjusted manually up or down by pressing the blue manual release
button. The opening is self-adjusting. To secure after use, press the
green button, which simultaneously activates the evaporator and
returns the Cin-Eliminator to its storage position.

You may log off if the green exit light is on over the evaporator.  If
the red light is illuminated, one of the Cin-Eliminator requirements
has not been properly implemented. Press the List Guy call button on
the right of the evaporator. He will secure all facilities from his
control panel.

To use the Auto-Unsub, first undress and place all your clothes in the
clothes rack. Put on the velcro slippers located in the cabinet
immediately below. Enter the shower, taking the entire kit with
you. On the control panel to your upper right upon entering you will
see a Shower seal button. Press to activate. A green light will then
be illuminated immediately below. On the intensity knob, select the
desired setting. Now depress the Auto-Unsub activation lever. Bathe
normally.

The Auto-Unsub will automatically go off after three minutes unless
you activate the Manual off override switch by flipping it up. When
you are ready to leave, press the blue Shower seal release
button. The door will open and you may leave. Please remove the velcro
slippers and place them in their container.

If you prefer the ultrasonic log-off mode, press the indicated blue
button. When the twin panels open, pull forward by rings A  B. The
knob to the left, just below the blue light, has three settings, low,
medium or high. For normal use, the medium setting is suggested.

After these settings have been made, you can activate the device by
switching to the ON position the clearly marked red switch. If
during the unsubscribing operation you wish to change the settings,
place the manual off override switch in the OFF position. You may
now make the change and repeat the cycle. When the green exit light
goes on, you may log off and have lunch. Please close the door behind
you.


-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] 7.3.2 closing connections, sometimes

2005-07-06 Thread felix
I hate to post as vague a description as this, but I don't think the
devil is in the details this time.  I may be wrong ...

This project is running 7.3.2 on a RedHat 9 system.  We plan to
upgrade in a few weeks to Fedora Core and Postgres 8, so maybe this
problem is not worth wasting too much time on, right now.

This is a SOAP server, Apache with mod_perl, connecting to Postgres
via DBI/DBD::Pg.  Sometimes it gets in a mood, for want of a better
term, where a specific SQL statement fails with the good ole message
server closed the connection unexpectedly.  It will fail like this
for several hours, then suddenly start working again.  The SQL that it
fails on works perfectly in psql; it always returns the exact data
expected.  It's a small table of perhaps a dozen lines, and does not
change very often.  I would suspect hardware except that a new machine
behaves just the same.

One of the puzzles is that nothing shows up in the log.  The log is
configured thusly:

server_min_messages = notice
client_min_messages = notice
log_min_error_statement = error

And yet the only messages that show up are start and stop.  I
changed log_min_error_statement to notice like the others, and it
hasn't failed since, but I doubt this is the cause, because it has
gone thru these mood swings before without having changed the log
level.  It's not the soap server disconnecting from the SOAP client,
because the server continues to log things.

Did 7.3.2 have any problems that might cause random disconnects, or
disconnects for some obscure but documented reason?  Google found some,
but none of them apply here, as far as I can tell.

Or are there useful changes to logging that might track this down?
Strace generated about 50MB of log file, too much for me!

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] 7.3.2 closing connections, sometimes

2005-07-06 Thread felix
On Wed, Jul 06, 2005 at 02:32:31PM -0400, Bruce Momjian wrote:
 
 This message is from the backend exiting abruptly.  Is isn't an ERROR
 as we define it for logging purposes.  That's why there is nothing in
 the logs.  I recommend turning on log_statement which prints before the
 query is run.

I hadn't thought of the error that way.  I do have query logging on,
and if I run that query directly, it finds the data I'd expect.  It's
a small table, or really three of them, all small for the time being.

select it.id, it.it_class_id, it.it_code_version_id,
it.it_data_version, it.note, it_class.class, it_class.id,
it_code_version.version, it_code_version.id, it_class.id,
it_code_version.id from it join it_class on (it_class.id =
it.it_class_id) join it_code_version on (it_code_version.id =
it.it_code_version_id) where class = ? AND version = ? AND
it_data_version > ?

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] 7.3.2 closing connections, sometimes

2005-07-06 Thread felix
On Wed, Jul 06, 2005 at 03:10:40PM -0400, Tom Lane wrote:
  [EMAIL PROTECTED] wrote:
  This is a SOAP server, Apache with mod_perl, connecting to Postgres
  via DBI/DBD::Pg.  Sometimes it gets in a mood, for want of a better
  term, where a specific SQL statement fails with the good ole message
  server closed the connection unexpectedly.  It will fail like this
 
 The specific statement being what exactly?

select it.id, it.it_class_id, it.it_code_version_id,
it.it_data_version, it.note, it_class.class, it_class.id,
it_code_version.version, it_code_version.id, it_class.id,
it_code_version.id from it join it_class on (it_class.id =
it.it_class_id) join it_code_version on (it_code_version.id =
it.it_code_version_id) where class = ? AND version = ? AND
it_data_version > ?

 Bruce Momjian pgman@candle.pha.pa.us writes:
  This message is from the backend exiting abruptly.  Is isn't an ERROR
  as we define it for logging purposes.  That's why there is nothing in
  the logs.
 
 Nonetheless I'd expect there to be at least a postmaster complaint about
 a crashed backend --- assuming that that's what's going on.  Do the
 other active connections get forcibly closed when this happens?

Haven't had any others open, it's a dev system.  But I'll try leaving
a psql session open.  Right now it's gotten itself into the mood of
always working, so it might have to wait a while.

Could a corrupt db cause these mood swings?  And if so, would that
persist even across dropdb / createdb?

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] 7.3.2 closing connections, sometimes

2005-07-06 Thread felix
On Wed, Jul 06, 2005 at 05:44:44PM -0400, Bruce Momjian wrote:
 [EMAIL PROTECTED] wrote:
  
  Could a corrupt db cause these mood swings?  And if so, would that
  persist even across dropdb / creatdb?
 
 Yes, that is possible, but usually it would fail consistently.  Have you
 run memtest and disk diagnostics?

I moved the disks to a new machine, same problem, which doesn't rule
out disk problems.  We were getting a second machine ready for testing
this problem, but my boss has decided to upgrade to 8.0.3 tonight for
himself, and probably very soon after for the rest of us, and the
problem is in the work mood right now, so we will no doubt follow the
general principle of changing many things at once to make tracking
things down more fun :-)

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Exception Handling in C-Language Functions?

2005-05-21 Thread Felix E. Klee
I have created a C-language function (code is below).  Now, I
wonder: How do I handle exceptions, for example if malloc cannot allocate
the necessary memory?  Do palloc and pfree handle such a case
cleanly?  Should I simply use an assert?

#include "postgres.h"
#include <string.h>
#include <stdlib.h>
#include "fmgr.h"
#include "libinn.h"

PG_FUNCTION_INFO_V1(ffiinews_uwildmat);

/* Wrapper for INN's function uwildmat.  Needs parameters in UTF-8. */
Datum ffiinews_uwildmat(PG_FUNCTION_ARGS) {
VarChar *text = PG_GETARG_VARCHAR_P(0);
VarChar *pattern = PG_GETARG_VARCHAR_P(1);
int text_len = VARSIZE(text)-VARHDRSZ;
int pattern_len = VARSIZE(pattern)-VARHDRSZ;
char *tmp_text = (char *)malloc(text_len+1);
if (tmp_text == NULL)
; /* What now? */
char *tmp_pattern = (char *)malloc(pattern_len+1);
if (tmp_pattern == NULL)
; /* What now? */
strncpy(tmp_text, VARDATA(text), text_len);
tmp_text[text_len] = '\0';
strncpy(tmp_pattern, VARDATA(pattern), pattern_len);
tmp_pattern[pattern_len] = '\0';
bool matches = uwildmat(tmp_text, tmp_pattern);
free(tmp_pattern);
free(tmp_text);
PG_RETURN_BOOL(matches);
}

-- 
Felix E. Klee

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Exception Handling in C-Language Functions?

2005-05-21 Thread Felix E. Klee
At Sat, 21 May 2005 20:13:13 +1000,
Neil Conway wrote:
  Do palloc and pfree handle such a case cleanly?
 
 Yes -- they will roll back the current transaction on if there is no 
 memory available. 

Great.  So I just use those.

 You can catch the error via PG_TRY() in 8.0 [...]

Nice, but we're not deploying 8.0.  However, at the moment, out of
memory exceptions seem to be the only ones that I have to deal with.

-- 
Felix E. Klee

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Exception Handling in C-Language Functions?

2005-05-21 Thread Felix E. Klee
At Sat, 21 May 2005 10:30:47 -0400,
Tom Lane wrote:
 BTW, a more future-proof way of doing what you want:
 
  [...]
 
 is to let the varchar output routine do it:
 
  Datum text_datum = PG_GETARG_DATUM(0);
  char *text;
 
  text = DatumGetCString(DirectFunctionCall1(varcharout, text_datum));
 
 This avoids assuming that you know the internal representation of
 varchar (and if you think that's frozen for eternity, you haven't been
 reading the discussions of ramping up our locale support...)

Thanks for all those hints!  I've now adapted my code as you recommended
and it works fine, although I'm not certain what DirectFunctionCall1 and
DatumGetCString actually do (it seems that the only documentation of
these functions is in the source code, and I'm too lazy to sift through
it at the moment).

-- 
Felix E. Klee

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Postgresql, SQL server and Oracle. Please, help

2005-01-25 Thread felix
On Tue, Jan 25, 2005 at 12:35:50PM -0500, Nadia Kunkov wrote:

 Hi, I was wondering where can I get documentation and especially examples on 
 PgSQL?  I've done a lot of work with Ms SQL server and need to rewrite a 
 number of stored procedures in PgSQL.  Is there a tutorial?  What is the best 
 way to start?
 Can I read an Oracle book?  How close is Oracle to Postgresql?

Not specifically what you want, but there's a book with the
imaginative name of SQL by Chris Fehily, Peachpit Press, ISBN
0-201-11803-0, meant as a beginner's guide, but which also includes
all the variations in SQL for PostgreSQL, Oracle, and MS SQL Server.
That is why I got it, and it is pretty handy for me, a non-guru.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] off Topic: Looking for a good sql query

2004-11-25 Thread felix
On Fri, Nov 26, 2004 at 01:19:59PM +1100, Jamie Deppeler wrote:

 This post is a bit off topic: I'm looking for a good SQL book; can someone give 
 me a recommendation?

I am a middling SQL guy; I have used functions, triggers, outer joins,
etc, but not often.  But I've been doing that for years :-) my
personal favorite is SQL Visual Quickstart Guide, Chris Fehily,
Peachpit Press, 2002, ISBN 0-201-11803-0.  It has lots of examples, it
remains steadily boring and consistent throughout, it goes from really
basic stuff to pretty fancy stuff, and most precious of all, it covers
five databases and tells you exactly how every single example differs
from db to db --

MS Access
MS SQL Server
MySQL
Oracle
PostgreSQL

If I only had one SQL book, this would be it.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Dropping schemas and illegal seek -- MEA CULPA

2004-06-07 Thread felix-lists
In article [EMAIL PROTECTED], Felix Finch [EMAIL PROTECTED] writes:

 I have a perl test program ...  and began getting this complaint
 from psql:

 Can't drop schemas: Illegal seek

Ahhh yesss... otherwise known as the subroutine not returning any
specific value and the caller expecting an error code, which explains
why adding debugging code changed it, running it under the debugger
changed it, and just about everything I did changed it, on a random
hit or miss basis.

My apologies for being too quick on the draw :-(

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Clustering for performance and fail over

2003-10-23 Thread James Felix Black
Hi, Stan,

We're in the early stages of testing a new Postgres (7.3) cluster.  For 
background, our database is about 14gb on disk, and we see about a 
transaction a second (out of about 120 queries/sec.)  Our application 
is a large dynamic Apache-based web system, written in Perl.  Our main 
database machine is a quad P4 Xeon (1.8ghz) with 4gb of RAM, running 
Linux 2.4.mumble; poorly formed queries and bad disk layout (we're 
working on it) mean that during times of peak traffic we'd see load 
sometimes up over 15.

For fail-over, we've been running the contrib/dbmirror single-master 
replication for about six months (in production) with no ill effects.  
We do reporting and db backup off of the slave machine, and it works 
great.  However, we project steady, linear growth in usage, and thus 
needed to find extra performance -- and it's not easy to get a higher 
performing shared-memory multiprocessor, to say nothing of cost.

As our system is pure Perl, we decided to replace the standard Perl 
database access layer with a custom, multiplexing handle cache.  It's 
been running for about a week now and distributing the load flawlessly. 
 A bonus is that proxying the queries has allowed us to begin to 
collect more interesting timing and usage statistics, and we're finally 
starting to hunt down and mercilessly improve our nastiest queries.

There are some refinements to the dbmirror that we're currently working 
on, but for now, everything is working flawlessly.

'jfb

C++: an octopus made by nailing extra legs onto a dog.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[GENERAL] Permission denied while importing data from a file?

2000-04-09 Thread Felix Slager

LS,

I'm trying to load data into a table using a COPY command:

COPY tabelName FROM '/somewhere/somebody/datafile.txt';

I've given everybody the right to use datafile.txt (execute, read,
write) with a very generous chmod.  Still I get the message:

ERROR:  COPY command, running in backend with effective uid nr 666,
could not open file '/somewhere/somebody/datafile.txt' for reading. 
Errno = Permission denied (13).

Why? I mean how does this happen?
Please help...
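
Could it be that, since the COPY runs inside the backend (the uid in
the error above) rather than inside psql, the server's OS user also
needs execute (search) permission on every directory along the path,
not just rights on the file itself?  If so, something like this should
fix it:

    chmod o+x /somewhere /somewhere/somebody
    chmod o+r /somewhere/somebody/datafile.txt

Or, sidestepping the server entirely, there is psql's client-side
variant, which reads the file with my own permissions:

    \copy tabelName from /somewhere/somebody/datafile.txt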

Felix



[GENERAL] Do transactions rollback by default?

2000-03-16 Thread felix

I am very much a self-taught SQL programmer, and I only use it for a
few small projects at home.  Work requires just a bit of db work on my
part; there are others who do the heavy lifting :-)

I was surprised when one of my heavy lifting co-workers told me that
when a transaction is begun, if the client disconnects (program bug,
computer crash, whatever) without doing an explicit commit or
rollback, the default in both Oracle and Sybase, and probably in every
SQL database, is to commit.  This seems completely backwards to me.
For instance, take the example from the Practical SQL Handbook of
transferring money from one account to another: you sure don't want any
changes at all if the client disconnects after having subtracted money
from one account but before having added that same amount to the second
account.
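
For concreteness (with a hypothetical accounts table):

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- if the client dies right here, no COMMIT was ever issued;
    -- I'd expect the server to throw the whole transaction away,
    -- leaving BOTH accounts untouched, not to commit the debit
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    COMMIT;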

Could someone enlighten me here?  He seemed absolutely positive that
a disconnect is as good as a commit and always has been.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



Re: [GENERAL] Do transactions rollback by default?

2000-03-16 Thread felix

In article [EMAIL PROTECTED], Bruce Momjian 
[EMAIL PROTECTED] writes:

 Could someone enlighten me here?  He seemed absolutely positive that
 a disconnect is as good as a commit and always has been.

 Disconnect should abort the transaction.  It does in PostgreSQL, and I
 _hope_ every other database.

Thanks for all the replies, and saving my sanity.  The heavy lifter
confirms what some supposed, that he is only talking about the console
/ GUI front end.  He was guessing about programming based on that.

I tried psql, and it does NOT default to commit, which is fine with
me.  I personally would find it confusing to have the front end
default to commit on exit.  I'd rather have it behave the same as a
program.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  GPG = E987 4493 C860 246C 3B1E  6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



[GENERAL] Question about Linux, threads, and Postgres

1999-01-31 Thread Felix Morley Finch

I've been running various versions of Postgres for several years now,
including one prototype web site for a customer.  I currently run
6.3.2 at home, and will upgrade to 6.4.2, but I have some questions.

I am programming on Linux 2.0, libc6, in C.

I am writing a server application to sit between the database and
clients.  I would like to thread it.  This is strictly a home
education application, so I don't mind experimenting.  I have written
some small threaded apps before, but nothing fancy.  In particular, I
have never used any "third party" libraries; it has all been my code
and general libc stuff.

The Linux thread documentation mentions problems using X libraries
that were built without threading support, be it designed-in or merely
enabled via the compile-time flag _REENTRANT.  As long as only one
thread calls all non-threaded functions, errno itself is safe, because
all the threaded code uses a per-thread value, leaving the global
errno to the unthreaded code.  I would not be surprised to find there
are other global variables also abused this way :-)

Q 1: Does anyone have experience with Postgres in a threaded program?

Q 2: Is it necessary and/or sufficient to have only one thread talk to
 the Postgres library?

Q 3: Would it be good / bad to compile 6.4.2 with -D_REENTRANT, and
 would this help with Q#2?
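
To make Q2 concrete, what I have in mind is something like this (a
sketch, assuming pthreads and a single shared connection, with every
libpq call -- and its use of the global errno -- serialized behind one
mutex):

    #include <pthread.h>
    #include <libpq-fe.h>

    static pthread_mutex_t db_lock = PTHREAD_MUTEX_INITIALIZER;
    static PGconn *conn;            /* opened once at startup */

    static PGresult *
    locked_exec(const char *query)
    {
        PGresult *res;

        /* only this function ever touches the connection, so libpq
         * never runs in two threads at once */
        pthread_mutex_lock(&db_lock);
        res = PQexec(conn, query);
        pthread_mutex_unlock(&db_lock);
        return res;                 /* caller must PQclear() it */
    }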

I searched the archives, but variations on "thread" found nothing but
references to threaded mailing list archives :-)

I also saw the news about 6.5 going from table locks to no locks.
This sounds pretty exciting!  I think it will be fun to write some
test programs for that, leave my box thrashing on it for a while when
I go to work :-)  But I have to find the usual spare time first :-(

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  PGP = 91 B3 94 7C E9 E8 76 2D   E1 63 51 AA A0 48 89 2F  ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o



Re: [GENERAL] Question about Linux, threads, and Postgres

1999-01-31 Thread Felix Morley Finch

In article [EMAIL PROTECTED], Adam Haberlach 
[EMAIL PROTECTED] writes:

 On Sat, Jan 30, 1999 at 09:08:14PM -0800, Felix Morley Finch wrote:

 Q 2: Is it necessary and/or sufficient to have only one thread talk to
 the Postgres library?

   Well, it would be sufficient--I am not sure if it is necessary.
 When I get around to cleaning up my hackish port of the client libs
 to BeOS, I will keep an eye out for things that could cause problems.

Hm.  I wonder.  If libpq doesn't know about per-thread errno, but
uses the global one, and my code thinks the per-thread errno is set,
this might cause problems.  I wonder if there are any other libc
variables which also have per-thread and global versions.

-- 
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
 Felix Finch: scarecrow repairman  rocket surgeon / [EMAIL PROTECTED]
  PGP = 91 B3 94 7C E9 E8 76 2D   E1 63 51 AA A0 48 89 2F  ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o