Re: [GENERAL] replacing single quotes
Ow Mun Heng wrote:
> Input is of the form 'ppp','aaa','bbb' and I want it stripped of quotes to become ppp,aaa,bbb. Escaping the quote would work, but it means I will have to do some magic on the input as well to escape it prior to replacing it.
>
>   select replace('AB\'A','\'','C')  -- this works
>
> Can I buy a clue here?

Sorry, all the advice is free here.

I'm a little confused that you think you will have to escape single quotes in the input. What is your use case? Normally the input is in some variable in some programming language; escaping single quotes is only needed for string literals. Like this:

  CREATE FUNCTION rep(v text) RETURNS text
    LANGUAGE sql IMMUTABLE STRICT
    AS $$SELECT replace($1, '''', '')$$;

BTW, I would use '' instead of \' to escape single quotes. It is safer and standards-compliant.

Yours,
Laurenz Albe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
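To illustrate Laurenz's point that escaping only matters inside SQL string literals: once the value lives in an ordinary program variable, stripping the quotes is a plain string replace. A minimal sketch in Python (the helper name is my own, not from the thread):

```python
def strip_single_quotes(s: str) -> str:
    """Remove every single-quote character from the input string.

    Mirrors the SQL replace($1, '''', ''): no escaping gymnastics are
    needed because s is a program variable, not a SQL literal.
    """
    return s.replace("'", "")

print(strip_single_quotes("'ppp','aaa','bbb'"))  # -> ppp,aaa,bbb
```

The same one-liner works in any host language; the `''''` contortion only appears when the quote character has to be written inside a SQL string literal.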
[GENERAL] pgodbc + Excel + msquery + background refresh
Just wondering if anyone here uses Excel to connect to PG via ODBC. I'm using it extensively as my platform to get data from PG/MSSQL directly into Excel. (Excel uses the msqry32.exe file, which is like a stripped-down SQL query tool that returns data directly into Excel.)

When connecting from Excel to MSSQL, I can get the query to run in the background, so a long-running query will not interfere with normal use of Excel (creating new sheets, graphing, etc.). However, when trying to achieve the same thing with PG, either:

1. PG/PG_ODBC doesn't parse/handle the request to run the query in the background, or
2. I'm doing something wrong.

I'm partial to #1, as it works fine on MSSQL. Here's a sample query macro which you can stick into Excel (Alt-F11, Insert - Module). The keyword here is ".Refresh BackgroundQuery:=True":

Sub macro1()
    SQL = "SELECT * FROM public.tablename LIMIT 5000"
    With ActiveSheet.QueryTables.Add(Connection:=Array(Array( _
            "ODBC;DRIVER={PostgreSQL Unicode};DATABASE=public;" & _
            "SERVER=127.0.0.1;PORT=5432;UID=pguser;PWD=pguser")), _
            Destination:=Range("A1"))
        .CommandText = SQL
        .FieldNames = True
        .RowNumbers = False
        .FillAdjacentFormulas = False
        .PreserveFormatting = True
        .RefreshOnFileOpen = False
        .BackgroundQuery = True
        .RefreshStyle = xlInsertDeleteCells
        .SavePassword = False
        .SaveData = True
        .AdjustColumnWidth = True
        .RefreshPeriod = 0
        .PreserveColumnInfo = True
        .Refresh BackgroundQuery:=True
    End With
End Sub

I think this is the last hurdle for me in moving from MSSQL to PG. Thanks, and hopefully there is someone who uses it this way.
Re: [GENERAL] SLEEP in posgresql
What if it's just SLEEP for 1 second? Why would it keep my stored procedure hanging?

On 10/10/07, Michal Taborsky - Internet Mall [EMAIL PROTECTED] wrote:
> Jasbinder Singh Bali napsal(a):
> > Instead of that, I re-engineered my while loop in the stored procedure as follows. ... I was wondering how to implement the SLEEP functionality here.
>
> Hello. I can't comment on the function itself, but I want to bring something else to your attention. Note that the stored procedure is always run as a single transaction, and by doing the sleep in it, it will probably run for a long time, or maybe even forever. The problem is that long-running transactions are evil(tm). Postgres, and almost any real database engine for that matter, has problems with transactions that run for a very long time: they prevent the cleanup of stale records, because the engine has to keep them around for the long-running transaction. You might consider doing the actual work in the transaction, but the sleeping in between should be done outside.
>
> Note to PG developers: Is there any thought being given to having the PL/pgSQL scripting language usable outside a function body, like Oracle has? It would be perfect for this case, and I remember more than a dozen times in the last year when I could have used it and saved some PHP work (and network communication).
>
> --
> Michal Táborský
> chief systems architect
> Internet Mall, a.s.
> http://www.MALL.cz
Re: [GENERAL] replacing single quotes
On Wed, 2007-10-10 at 09:11 +0200, Albe Laurenz wrote:
> Ow Mun Heng wrote:
> > Input is of the form 'ppp','aaa','bbb' and I want it stripped of quotes to become ppp,aaa,bbb. ...
>
> I'm a little confused that you think you will have to escape single quotes in the input. What is your use case? Normally the input is in some variable in some programming language; escaping single quotes is only needed for string literals. Like this:
>
>   CREATE FUNCTION rep(v text) RETURNS text
>     LANGUAGE sql IMMUTABLE STRICT
>     AS $$SELECT replace($1, '''', '')$$;
>
> BTW, I would use '' instead of \' to escape single quotes. It is safer and standards-compliant.

The input is for an SRF which accepts an array, e.g.:

  select * from foo(date1, date2, '{aaa,bbb,ccc}')

where the function goes:

  create function foo(timestamp, timestamp, foo_list text[])
  returns setof results as
  ...
  where foo_list = any (foo_list)
  ...
[GENERAL] Upper and Lower-cased Database names?
I am finding out for the first time that by having a database created with the name MyTest, I cannot do a simple query as follows:

postgres=# select * from MyTest.public.cars;
ERROR:  cross-database references are not implemented: mytest.public.cars

Notice, however, since I created a cars table in the postgres database, I was able to do a query:

postgres=# select * from postgres.public.cars;
 carid |     name     | vendor | type
-------+--------------+--------+------
 H1    | Civic        | Honda  | FF
 N1    | Skyline GT-R | Nissan | 4WD
 T1    | Supra        | Toyota | FR
 T2    | MR-2         | Toyota | FF
(4 rows)

So the problem, it seems, is that mixed-case database names might not be supported with PostgreSQL? I have a feeling that the default character set is SQL_ASCII and should be changed to something else. What might that be, and how can I change/update the character set (encoding)?

Thanks!
Dan
Re: [GENERAL] Upper and Lower-cased Database names?
This error probably does not have anything to do with the case of your database name. Instead, you probably logged into a database which is not MyTest, because it is not possible to log into database x and run a query on database y.

Christian Rengstl M.A.
Klinik und Poliklinik für Innere Medizin II
Kardiologie - Forschung
Universitätsklinikum Regensburg
B3 1.388
Franz-Josef-Strauss-Allee 11
93053 Regensburg
Tel.: +49-941-944-7230

On Wed, Oct 10, 2007 at 10:05 AM, Daniel B. Thurman [EMAIL PROTECTED] wrote:
> I am finding out for the first time that by having a database created with the name MyTest, I cannot do a simple query as follows:
>
> postgres=# select * from MyTest.public.cars;
> ERROR:  cross-database references are not implemented: mytest.public.cars
>
> Notice, however, since I created a cars table in the postgres database, I was able to query postgres.public.cars. So the problem, it seems, is that mixed-case database names might not be supported? I have a feeling that the default character set is SQL_ASCII and should be changed to something else. What might that be, and how can I change/update the character set (encoding)?
Re: [GENERAL] Upper and Lower-cased Database names?
Daniel B. Thurman wrote:
> I am finding out for the first time that by having a database created with the name MyTest, I cannot do a simple query as follows:
>
> postgres=# select * from MyTest.public.cars;
> ERROR:  cross-database references are not implemented: mytest.public.cars

Correct - a query takes place within a specific database. You want to connect to mytest and then issue your query.

> Notice, however, since I created a cars table in the postgres database, I was able to do a query:
>
> postgres=# select * from postgres.public.cars;
>  carid |     name     | vendor | type
> -------+--------------+--------+------
>  H1    | Civic        | Honda  | FF
>  N1    | Skyline GT-R | Nissan | 4WD
>  T1    | Supra        | Toyota | FR
>  T2    | MR-2         | Toyota | FF
> (4 rows)

You're logged in to the postgres database, and you're querying the postgres database. The query is equivalent to:

  SELECT * FROM public.cars;

or, assuming the public schema is in your search_path:

  SELECT * FROM cars;

If you were logged in to a different database, your query would fail with the same error as previously.

> So the problem, it seems, is that mixed-case database names might not be supported?

No, it works fine, although PG folds to lower-case rather than upper-case (the standard). However, the rule of thumb is: if you create the database/table with "quotes" to preserve case, then always access it with "quotes". So:

  CREATE TABLE Foo     -- gets folded to lower-case
  SELECT * FROM Foo    -- so does this, so it works
  SELECT * FROM FOO    -- works
  SELECT * FROM foo    -- works
  SELECT * FROM "Foo"  -- fails, because you've stopped case-folding

  CREATE TABLE "Bar"
  SELECT * FROM "Bar"  -- works
  SELECT * FROM Bar    -- fails, because this gets folded to lower-case

> I have a feeling that the default character set is SQL_ASCII and should be changed to something else. What might that be, and how can I change/update the character set (encoding)?

Well, you probably want a different character set, but that will depend upon your locale and the character set of the data you are storing. Nothing to do with this.
HTH
--
Richard Huxton
Archonet Ltd
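Richard's quoting rule can be modeled outside the database. Here is a toy sketch of PostgreSQL's identifier folding in Python (the function name is mine and real SQL parsing is more involved; this only illustrates the quoted-vs-unquoted distinction):

```python
def fold_identifier(ident: str) -> str:
    """Toy model of PostgreSQL identifier folding: double-quoted
    identifiers keep their case, unquoted ones are folded to
    lower-case (note: the SQL standard folds to UPPER-case)."""
    if len(ident) >= 2 and ident[0] == '"' and ident[-1] == '"':
        return ident[1:-1]   # quoted: case preserved
    return ident.lower()     # unquoted: folded

# CREATE TABLE Foo stores the table under the folded name "foo" ...
print(fold_identifier('Foo'))    # -> foo
# ... so SELECT * FROM "Foo" looks up the unfolded name and fails:
print(fold_identifier('"Foo"'))  # -> Foo
```

The mismatch between `foo` (what got stored) and `Foo` (what the quoted lookup asks for) is exactly why mixing quoted and unquoted references to the same name breaks.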
Re: [GENERAL] SLEEP in posgresql
Jasbinder Singh Bali napsal(a):
> Instead of that, I re-engineered my while loop in the stored procedure as follows. ... I was wondering how to implement the SLEEP functionality here.

Hello. I can't comment on the function itself, but I want to bring something else to your attention. Note that the stored procedure is always run as a single transaction, and by doing the sleep in it, it will probably run for a long time, or maybe even forever. The problem is that long-running transactions are evil(tm). Postgres, and almost any real database engine for that matter, has problems with transactions that run for a very long time: they prevent the cleanup of stale records, because the engine has to keep them around for the long-running transaction. You might consider doing the actual work in the transaction, but the sleeping in between should be done outside.

Note to PG developers: Is there any thought being given to having the PL/pgSQL scripting language usable outside a function body, like Oracle has? It would be perfect for this case, and I remember more than a dozen times in the last year when I could have used it and saved some PHP work (and network communication).

--
Michal Táborský
chief systems architect
Internet Mall, a.s.
http://www.MALL.cz
Re: [GENERAL] Upper and Lower-cased Database names?
Daniel, please try:

  select * from "MyTest".public.cars;

Mixed-case names need those "double quotes", per the SQL standard. In my experience, mixed case in qualifiers which cross OS barriers causes more trouble than it is worth (... filenames with web servers ...).

Harald

> postgres=# select * from MyTest.public.cars;
> ERROR:  cross-database references are not implemented: mytest.public.cars

--
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
fx 01212-5-13695179
-
EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!
Re: [GENERAL] SLEEP in posgresql
Jasbinder Singh Bali wrote:
> What if it's just SLEEP for 1 second? Why would it keep my stored procedure hanging?

Because presumably your loop condition isn't under your control (otherwise you wouldn't need to sleep). Can you *always* guarantee the condition (a=b) will happen within a reasonable time-frame?

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] SLEEP in posgresql
My loop is a busy wait and keeps iterating until the a=b condition is met. However, that would lead to millions of instructions executing per second, so to save resources I want to put a sleep before re-iterating. I don't understand how SLEEP is disastrous here, even if I don't know when my loop is going to end.

On 10/10/07, Richard Huxton [EMAIL PROTECTED] wrote:
> Jasbinder Singh Bali wrote:
> > What if it's just SLEEP for 1 second? Why would it keep my stored procedure hanging?
>
> Because presumably your loop condition isn't under your control (otherwise you wouldn't need to sleep). Can you *always* guarantee the condition (a=b) will happen within a reasonable time-frame?
>
> --
> Richard Huxton
> Archonet Ltd
Re: [GENERAL] SLEEP in posgresql
Jasbinder Singh Bali wrote:
> My loop is a busy wait and keeps iterating until the a=b condition is met. However, that would lead to millions of instructions executing per second, so to save resources I want to put a sleep before re-iterating. I don't understand how SLEEP is disastrous here, even if I don't know when my loop is going to end.

What if it takes a week? That means you'll have a transaction open for a week, blocking vacuum from reclaiming space.

--
Richard Huxton
Archonet Ltd
Re: [GENERAL] replacing single quotes
Ow Mun Heng wrote:
> > Input is of the form 'ppp','aaa','bbb' and I want it stripped of quotes to become ppp,aaa,bbb.
>
> > I'm a little confused that you think you will have to escape single quotes in the input. What is your use case? Normally the input is in some variable in some programming language. Escaping single quotes is only for string literals.
>
> The input is for an SRF which accepts an array, e.g.:
>
>   select * from foo(date1, date2, '{aaa,bbb,ccc}')
>
> where the function goes:
>
>   create function foo(timestamp, timestamp, foo_list text[]) returns setof ...

Yes, but where does '{aaa,bbb,ccc}' come from? I assume that this string literal is only an example that you use to describe how the function is called. In reality you will have varying values for the foo_list function argument, so you'll store it in some kind of variable, right? In which programming language do you write?

Yours,
Laurenz Albe
Re: [GENERAL] Upper and Lower-cased Database names?
2007/10/10, Daniel B. Thurman [EMAIL PROTECTED]:
> I am finding out for the first time that by having a database created with the name MyTest, I cannot do a simple query as follows:
>
> postgres=# select * from MyTest.public.cars;
> ERROR:  cross-database references are not implemented: mytest.public.cars
>
> [...]

PostgreSQL doesn't support cross-database references, as per the error message, i.e. you can only perform queries on the current database. *However*, the syntax works when the named database is the same as the one you're connected to. If you do:

  \c MyTest
  mytest=# select * from MyTest.public.cars;

the query will work (case is not the problem here).

HTH

Ian Barwick
--
http://sql-info.de/index.html
Re: [GENERAL] DB upgrade
On Tue, 2007-10-09 at 11:46 +0200, Andrew Kelly wrote:
> Hi folks, please forgive what feels like a no-brainer even as I ask it, but...
> [snip]

Just wanted to thank everybody who's provided feedback. I'm squared away now, and very appreciative of all the help.

Andy
[GENERAL] ORDER BY - problem with NULL values
Hi there,

if I order a given year in DESCending ORDER, so that the highest values (of a given variable) for the countries are displayed at the top of the list, the NULL values actually appear first; only below them do I find the values ordered correctly.

Is there any way to
a) make the countries with NULL values appear at the bottom of the list
b) leave out the NULL values while still allowing the countries to be displayed?

My SQL looks like this:

SELECT
  SUM(CASE WHEN year=2002 THEN value ELSE NULL END) AS y_2002,
  SUM(CASE WHEN year=2001 THEN value ELSE NULL END) AS y_2001,
  SUM(CASE WHEN year=2000 THEN value ELSE NULL END) AS y_2000,
  c.name AS name
FROM aquacult_prod_marine AS d
LEFT JOIN countries AS c ON c.id = id_country
GROUP BY name
ORDER BY y_2000 DESC

Thanks for any info...

Stef

Stefan Schwarzer
Lean Back and Relax - Enjoy some Nature Photography: http://photoblog.la-famille-schwarzer.de
Appetite for Global Data? UNEP GEO Data Portal: http://geodata.grid.unep.ch
Re: [GENERAL] SLEEP in posgresql
I don't know how a sleep of 1 second is going to be harmful here compared to keeping a busy wait. Even if a=b is going to take 1 week as you say, wouldn't a sleep of 1 second at least save some CPU resources instead of blindly looping forever? Aren't busy waits dangerous from a CPU-resources point of view? Won't it keep my CPU busy forever? Also, if there's a sleep of 1 second, is it going to give me some saving in CPU resources or not?

On 10/10/07, Richard Huxton [EMAIL PROTECTED] wrote:
> Jasbinder Singh Bali wrote:
> > My loop is a busy wait and keeps iterating until the a=b condition is met. ...
>
> What if it takes a week? That means you'll have a transaction open for a week, blocking vacuum from reclaiming space.
>
> --
> Richard Huxton
> Archonet Ltd
Re: [GENERAL] replacing single quotes
On Wed, 2007-10-10 at 10:46 +0200, Albe Laurenz wrote:
> Yes, but where does '{aaa,bbb,ccc}' come from? I assume that this string literal is only an example that you use to describe how the function is called.

It's input from a user. However, the input is of the form 'aaa','bbb','ccc', which needs to be stripped down to the form aaa,bbb,ccc.

> In reality you will have varying values for the foo_list function argument. So you'll store it in some kind of variable, right?

Yes, and that variable is called foo_list.

> In which programming language do you write?

plpgsql. This is the SRF, which is joined to another query that uses the "where foo_list in ('aaa','bbb','ccc')" syntax, which is different from the array syntax:

CREATE OR REPLACE FUNCTION foo(fromdate timestamp without time zone,
                               todate timestamp without time zone,
                               code text[])
  RETURNS SETOF trh_amb AS
$BODY$
DECLARE
    rec RECORD;
BEGIN
    FOR rec IN
        SELECT ...
        FROM foo_table_a a
        INNER JOIN foo_table_b b ON a.a = b.a
        WHERE a.date_time BETWEEN fromdate AND todate
          AND a.foo_list = any (code)
    LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
$BODY$
LANGUAGE 'plpgsql' IMMUTABLE STRICT;
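The remaining gap in this thread is turning the user's "'aaa','bbb','ccc'" input into the '{aaa,bbb,ccc}' array-literal form the SRF call expects. A minimal sketch of that conversion in Python (my own helper, not from the thread; it assumes the elements themselves contain no quotes, commas, or braces - real input should be passed as a bound query parameter rather than spliced into SQL text):

```python
def to_pg_array_literal(user_input: str) -> str:
    """Convert input like "'aaa','bbb','ccc'" into the PostgreSQL
    array-literal form "{aaa,bbb,ccc}" for a text[] argument.
    Assumes simple, quote/comma/brace-free element values."""
    items = [item.strip().strip("'") for item in user_input.split(",")]
    return "{" + ",".join(items) + "}"

print(to_pg_array_literal("'aaa','bbb','ccc'"))  # -> {aaa,bbb,ccc}
```

With this, the call becomes select * from foo(date1, date2, '<result>'), which is exactly the quote-stripping the original poster was asking about, done in the host language instead of inside SQL.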
Re: [GENERAL] How to speedup intarray aggregate function?
Thanks for your comment. I see two possible solution directions:

1. Is it possible to create a C function which could accept something like ROWSET(ARRAY[]) in its input parameters? E.g. to call it as

  SELECT array_rowset_glue((SELECT arrayfield FROM arraytable));

or something like this?

2. Is it possible to implement in C something like this?

  array_buffer_init();
  SELECT array_buffer_push(arrayfield) FROM arraytable;
  ids := array_buffer_get();
  array_buffer_free();

where array_buffer_push() is an aggregate function which returns void but, as a side effect, appends arrayfield to a global array buffer for later access with array_buffer_get().

On 10/10/07, Filip Rembiałkowski [EMAIL PROTECTED] wrote:
> 2007/10/10, Dmitry Koterov [EMAIL PROTECTED]:
> > Hello. I created an aggregate:
> >
> >   CREATE AGGREGATE intarray_aggregate_push (_int4) (
> >     STYPE = _int4,
> >     SFUNC = intarray_push_array,
> >     INITCOND = '{}'
> >   );
> >
> > (or - I may use _int_union instead of intarray_push_array; its speed is practically the same in my case). This aggregate merges together a list of integer[] arrays, resulting in one big array with all elements. Then I want to use this aggregate:
> >
> >   SELECT intarray_aggregate_push(arrayfield) FROM arraytable
> >
> > The table arraytable contains a lot of rows (about 5000), each row has an array of 5-10 elements, so the resulting array should contain about 50,000 elements. The query is okay, but its speed is too bad: about 1 second. The main problem is the speed of the intarray_aggregate_push function - it is quite slow, because intarray_push_array reallocates the memory each time I merge two arrays.
> > I am pretty sure that the reallocation and copying is the bottleneck, because if I use another dummy aggregate:
> >
> >   CREATE AGGREGATE intarray_aggregate_dummy (_int4) (
> >     STYPE = _int4,
> >     SFUNC = dummy,
> >     INITCOND = '{}'
> >   );
> >
> >   CREATE OR REPLACE FUNCTION public.dummy (a integer[], b integer[])
> >   RETURNS integer[] AS
> >   $body$
> >   BEGIN
> >     RETURN a;
> >   END;
> >   $body$
> >   LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
> >
> > where dummy() is a function which returns its first argument without any modification, the speed grows dramatically - about 25 ms (instead of 1000 ms!). The question is: how could I optimize this, and is it possible at all in Postgres? I just want to get one large array glued from a lot of smaller arrays...
>
> 1. No wonder copying is the bottleneck - this is what the aggregate does, mostly.
> 2. You can use plain array_cat for this; in my test it is a few percent faster.
> 3. In this case I guess the intarray contrib is not an option; AFAIK it was created only for speeding up searches, that is, int4[] lookups.
> 4. To have the kind of optimization you talk about, we would need an aggregate operating (in this case appending) directly on internal state. I'm not sure if this is possible in Postgres.
> 5. My results:
>    - your method (using intarray_push_array): 940 ms
>    - using array_cat: 860 ms
>    - same in PL/pgSQL (LOOP, append): 800 ms
>    - same thing in Perl, no database (push array of arrays into one and print): 18 ms
>
> cheers, Filip
> --
> Filip Rembiałkowski
Re: [GENERAL] SLEEP in posgresql
Jasbinder Singh Bali wrote:
> I don't know how a sleep of 1 second is going to be harmful here compared to keeping a busy wait. Even if a=b is going to take 1 week as you say, wouldn't a sleep of 1 second at least save some CPU resources instead of blindly looping forever? Aren't busy waits dangerous from a CPU-resources point of view? Won't it keep my CPU busy forever? Also, if there's a sleep of 1 second, is it going to give me some saving in CPU resources or not?

It's not the sleep that people are saying is harmful. It's the waiting in a loop in a transaction.

--
Richard Huxton
Archonet Ltd
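The advice in this thread boils down to: keep each unit of database work in its own short transaction, and do the sleeping outside any transaction. A language-agnostic sketch of that polling pattern in Python (the function names and the one-second default interval are illustrative; condition() and do_work() would each open and commit their own short transaction):

```python
import time

def poll_until(condition, do_work, interval_s=1.0, max_tries=None):
    """Busy-wait replacement: sleep OUTSIDE any transaction between
    checks, so nothing stays open (and vacuum is not blocked) while
    we wait for the condition to become true."""
    tries = 0
    while not condition():          # short, self-contained check
        tries += 1
        if max_tries is not None and tries >= max_tries:
            return False
        time.sleep(interval_s)      # no locks or snapshots held here
    do_work()                       # short transaction, only when ready
    return True

# toy demonstration: the condition becomes true on the third check
state = {"checks": 0}
def ready():
    state["checks"] += 1
    return state["checks"] >= 3

poll_until(ready, lambda: None, interval_s=0.01)
print(state["checks"])  # -> 3
```

The sleep itself is cheap; what the in-procedure version cannot avoid is that the whole wait happens inside one open transaction, which is the part Michal and Richard are warning against.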
Re: [GENERAL] ORDER BY - problem with NULL values
Stefan Schwarzer wrote:
> Hi there, if I order a given year in DESCending ORDER, so that the highest values (of a given variable) for the countries are displayed at the top of the list, the NULL values actually appear first; only below them do I find the values ordered correctly. Is there any way to
> a) make the countries with NULL values appear at the bottom of the list
> b) leave out the NULL values while still allowing the countries to be displayed?

Not sure what you mean by (b), but (a) is straightforward enough:

=> SELECT * FROM (SELECT 1 AS a UNION SELECT 2 UNION SELECT null::int) AS foo
   ORDER BY (a IS NULL), a DESC;
 a
---
 2
 1

(3 rows)

=> SELECT * FROM (SELECT 1 AS a UNION SELECT 2 UNION SELECT null::int) AS foo
   ORDER BY (a IS NOT NULL), a DESC;
 a
---

 2
 1
(3 rows)

--
Richard Huxton
Archonet Ltd
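The same NULLs-last trick carries over to any language with tuple-ordered sort keys: the boolean "is null" test becomes the first component of the key. For example, in Python (my own illustration, not from the thread), sorting descending with None values forced to the bottom:

```python
def sort_desc_nulls_last(values):
    """Sort descending, pushing None to the end - the analogue of
    ORDER BY (a IS NULL), a DESC."""
    # (v is None) is False (0) for real values and True (1) for None,
    # so all Nones sort after everything else; -v gives descending order.
    return sorted(values, key=lambda v: (v is None, -v if v is not None else 0))

print(sort_desc_nulls_last([1, None, 2]))  # -> [2, 1, None]
```

Flipping the first key component to `(v is not None)` puts the Nones first instead, exactly mirroring the two SQL examples above it in the thread.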
Re: [GENERAL] How to speedup intarray aggregate function?
Wow, seems I've found that!

 * Beginning in PostgreSQL 8.1, the executor's AggState node is passed as
 * the fmgr context value in all transfunc and finalfunc calls.  It is
 * not really intended that the transition functions will look into the
 * AggState node, but they can use code like
 *   if (fcinfo->context && IsA(fcinfo->context, AggState))
 * to verify that they are being called by nodeAgg.c and not as ordinary
 * SQL functions.  The main reason a transition function might want to know
 * that is that it can avoid palloc'ing a fixed-size pass-by-ref transition
 * value on every call: it can instead just scribble on and return its left
 * input.  Ordinarily it is completely forbidden for functions to modify
 * pass-by-ref inputs, but in the aggregate case we know the left input is
 * either the initial transition value or a previous function result, and
 * in either case its value need not be preserved.  See int8inc() for an
 * example.  Notice that advance_transition_function() is coded to avoid a
 * data copy step when the previous transition value pointer is returned.

So theoretically I may create an intarray_aggregate_push() function which, when called by an aggregate, does not reallocate and copy memory each time it is called. Instead, it may allocate 1 MB of memory at once (with a gap), or enlarge the memory segment by a factor of 2 whenever it needs to be reallocated (O(log N) reallocations instead of O(N)).

And here is an example from the source code:

Datum
int8inc(PG_FUNCTION_ARGS)
{
    if (fcinfo->context && IsA(fcinfo->context, AggState))
    {
        /*
         * Special case to avoid palloc overhead for COUNT(): when called from
         * nodeAgg, we know that the argument is modifiable local storage, so
         * just update it in-place.
         *
         * Note: this assumes int8 is a pass-by-ref type; if we ever support
         * pass-by-val int8, this should be ifdef'd out when int8 is
         * pass-by-val.
         */
        int64      *arg = (int64 *) PG_GETARG_POINTER(0);
        int64       result;

        result = *arg + 1;

        /* Overflow check */
        if (result < 0 && *arg > 0)
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("bigint out of range")));

        *arg = result;
        PG_RETURN_POINTER(arg);
    }
    ...
}

On 10/10/07, Dmitry Koterov [EMAIL PROTECTED] wrote:
> Thanks for your comment. I see two possible solution directions:
>
> 1. Is it possible to create a C function which could accept something like ROWSET(ARRAY[]) in its input parameters? E.g. to call it as SELECT array_rowset_glue((SELECT arrayfield FROM arraytable)); or something like this?
>
> 2. Is it possible to implement in C something like this?
>
>   array_buffer_init();
>   SELECT array_buffer_push(arrayfield) FROM arraytable;
>   ids := array_buffer_get();
>   array_buffer_free();
>
> where array_buffer_push() is an aggregate function which returns void but, as a side effect, appends arrayfield to a global array buffer for later access with array_buffer_get().
> I am pretty sure that the reallocation and copying is the bottleneck, because if I use another dummy aggregate:
>
>   CREATE AGGREGATE intarray_aggregate_dummy (_int4) (
>     STYPE = _int4,
>     SFUNC = dummy,
>     INITCOND = '{}'
>   );
>
>   CREATE OR REPLACE FUNCTION public.dummy (a integer[], b integer[])
>   RETURNS integer[] AS
>   $body$
>   BEGIN
>     RETURN a;
>   END;
>   $body$
>   LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
>
> where dummy() is a function which returns its first argument without any modification, the speed grows dramatically - about 25 ms (instead of 1000 ms!). The question is: how could I optimize this, and is it possible at all in Postgres? I just want to get one large array glued from a lot of smaller arrays...
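Dmitry's grow-by-doubling idea is the classic amortized strategy behind dynamic arrays. A small Python model (function names are mine) counting how many reallocations each strategy performs for N appends:

```python
def reallocs_exact_fit(n_appends: int) -> int:
    """Grow capacity to exactly what is needed on every append:
    one reallocation (and full copy) per append -> O(N) reallocs."""
    return n_appends

def reallocs_doubling(n_appends: int) -> int:
    """Double the capacity whenever it is exhausted:
    only O(log N) reallocations for N appends."""
    capacity, reallocs = 1, 0
    for used in range(1, n_appends + 1):
        if used > capacity:
            capacity *= 2
            reallocs += 1
    return reallocs

print(reallocs_exact_fit(5000))  # -> 5000
print(reallocs_doubling(5000))   # -> 13  (2**13 = 8192 >= 5000)
```

That drop from thousands of reallocations to a handful is exactly why scribbling on the transition value in place (as int8inc does) instead of palloc'ing a fresh copy per row makes the aggregate so much faster.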
Re: [GENERAL] ORDER BY - problem with NULL values
> > Is there any way to
> > a) make the countries with NULL values appear at the bottom of the list
> > b) leave out the NULL values while still allowing the countries to be displayed?
>
> Not sure what you mean by (b), but (a) is straightforward enough:
>
> => SELECT * FROM (SELECT 1 AS a UNION SELECT 2 UNION SELECT null::int) AS foo
>    ORDER BY (a IS NULL), a DESC;
>  a
> ---
>  2
>  1
>
> (3 rows)

Looks easy. If I apply this to my SQL:

SELECT
  SUM(CASE WHEN year=2002 THEN value ELSE NULL END) AS y_2002,
  SUM(CASE WHEN year=2001 THEN value ELSE NULL END) AS y_2001,
  SUM(CASE WHEN year=2000 THEN value ELSE NULL END) AS y_2000,
  c.name AS name
FROM aquacult_prod_marine AS d
LEFT JOIN countries AS c ON c.id = id_country
GROUP BY name
ORDER BY y_2000 DESC

I would then say:

ORDER BY (y_2000 IS NULL), y_2000 DESC

But then I get an error:

ERROR:  column "y_2000" does not exist

What am I doing wrong?
Re: [GENERAL] ORDER BY - problem with NULL values
Stefan Schwarzer wrote: Hi there, if I order a given year in DESCending ORDER, so that the highest values (of a given variable) for the countries are displayed at the top of the list, then actually the NULL values appear as first. Only below, I find the values ordered correctly. Is there any way to a) make the countries with NULL values appear at the bottom of the list b) neglect the NULL values by still allowing the countries to be displayed Not sure what you mean by (b), but (a) is straightforward enough. = SELECT * FROM (SELECT 1 as a UNION SELECT 2 UNION SELECT null::int) AS foo ORDER BY (a is null), a DESC; a --- 2 1 (3 rows) Looks easy. If I apply this to my SQL: SELECT SUM(CASE WHEN year=2002 THEN value ELSE NULL END) AS y_2002, SUM(CASE WHEN year=2001 THEN value ELSE NULL END) AS y_2001, SUM(CASE WHEN year=2000 THEN value ELSE NULL END) AS y_2000, c.name AS name FROM aquacult_prod_marine AS d LEFT JOIN countries AS c ON c.id = id_country GROUP BY name ORDER BY y_2000 DESC I would then say: ORDER BY (y_2000 is null), y_2000 DESC But then I get an Error warning: ERROR: column y_2000 does not exist What do I do wrong? Hmm... Nothing. The ORDER BY clause should get processed last, after column-aliasing (which labels your column y_2000). However, it seems like PG is evaluating the (X is null) clause earlier. = SELECT version,count(*) AS ct FROM items.wiki_items GROUP BY version ORDER BY (ct IS NULL); ERROR: column ct does not exist ^ = SELECT version,count(*) AS ct FROM items.wiki_items GROUP BY version ORDER BY (count(*) IS NULL); ...works... I can see why, but it's a pain. You've got two options: 1. Repeat the expression as I've done above ORDER BY (CASE (...) END IS NULL), y_2000 DESC 2. Wrap your query in another query so the column aliases are available: SELECT * FROM (your query) AS results ORDER BY y_2000 IS NULL, y_2000 DESC -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
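Richard's second option, spelled out against the query from the thread. A sketch only: table and column names are taken from the original message, and it assumes `id_country` is a column of `aquacult_prod_marine`.

```sql
-- Wrap the aggregating query in a subquery so the outer ORDER BY can
-- see the column alias y_2000.
SELECT *
FROM (
    SELECT SUM(CASE WHEN year = 2000 THEN value ELSE NULL END) AS y_2000,
           c.name AS name
    FROM aquacult_prod_marine AS d
    LEFT JOIN countries AS c ON c.id = d.id_country
    GROUP BY c.name
) AS results
ORDER BY (y_2000 IS NULL), y_2000 DESC;
```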
Re: [GENERAL] Generating subtotal reports direct from SQL
At 1:32 AM +0100 10/10/07, Gregory Stark wrote: Owen Hartnett [EMAIL PROTECTED] writes: Such that the final table has additional subtotal rows with the aggregate sum of the amounts. I'm thinking I can generate two tables and merge them, but is there an easier way using a fancy Select statement? Unfortunately the fancy SQL feature you're looking for is ROLLUP which Postgres doesn't support. I think you would have to do ( select id, code, amount from foo where code < 10 union all select null, code, sum(amount) from foo where code < 10 group by code ) order by code, id Perfect. Just what I was looking for. Thanks! -Owen ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
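The UNION ALL workaround above, written out as one runnable statement. A sketch assuming a table foo(id, code, amount); the comparison operator in the archived message was lost, so `code < 10` is an assumption.

```sql
-- Detail rows plus one subtotal row per code (NULL id marks the
-- subtotal), emulating the ROLLUP feature Postgres lacks here.
-- The filter "code < 10" is assumed; the operator was lost in archiving.
SELECT id, code, amount FROM foo WHERE code < 10
UNION ALL
SELECT NULL, code, SUM(amount) FROM foo WHERE code < 10 GROUP BY code
ORDER BY code, id;
```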
Re: [GENERAL] replacing single quotes
Ow Mun Heng wrote: Input is of form 'ppp','aaa','bbb' I want it to be stripped of quotes to become ppp,aaa,bbb The input is for an SRF which accepts an array.. where the function goes.. create function foo(timestamp, timestamp, foo_list text[]) returns setof ... I said: In reality you will have varying values for the foo_list function argument. So you'll store it in some kind of variable, right? In which programming language do you write? Answer: plpgsql Ok, we're coming closer. What I mean: In which programming language is the *call* to function foo()? Java? PHP? PL/pgSQL? Could you tell me the exact statement that you use to call foo()? Why do I ask this? An example: If you use embedded SQL to call the function, and the input is stored in the host variable inpstr, then the answer would be: EXEC SQL DECLARE c CURSOR FOR SELECT * FROM foo(localtimestamp, localtimestamp, string_to_array(replace(:inpstr, '''', ''), ',')); Yours, Laurenz Albe ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
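The same expression works outside embedded SQL too. A sketch with the input inlined as a literal standing in for the host variable; `''''` is how a single quote is written inside an SQL string literal.

```sql
-- Strip the single quotes, then split on commas into a text[].
SELECT string_to_array(replace('''ppp'',''aaa'',''bbb''', '''', ''), ',');
-- result: {ppp,aaa,bbb}
```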
Re: [GENERAL] Upper and Lower-cased Database names?
Daniel B. Thurman wrote: I am finding out for the first time that by having a database created with the name: MyTest, I cannot do a simple query as follows: postgres=# select * from MyTest.public.cars; ERROR: cross-database references are not implemented: mytest.public.cars Try: select * from "MyTest".public.cars; Yours, Laurenz Albe ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
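The point being that unquoted identifiers fold to lower case, while double quotes preserve the mixed-case name. A sketch (and note the session must be connected to the MyTest database, since cross-database references are not supported):

```sql
SELECT * FROM MyTest.public.cars;    -- folds to mytest.public.cars
SELECT * FROM "MyTest".public.cars;  -- looks up the mixed-case MyTest
```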
Re: [GENERAL] timer script from SAMS book or equivalent?
Could someone explain to me how the connection is initialized below? I'm re-writing the script in Ruby...but this is a stumbling block for me in the way the C++ libs work. Does the library initialize conn automagically? How would one emulate this in another language...would it be to run it as the postgres user and connect to template1? Thanks, John

int main( int argc, char * argv[] )
{
    // how is this being initialized?
    connection conn;

    work tran1( conn, "getBegValues" );
    const result begTupleValues = getTupleValues( tran1, argc > 2 ? argv[2] : 0 );
    const result begBlockValues = getBlockValues( tran1, argc > 2 ? argv[2] : 0 );
    const result ignore = tran1.exec( argv[1] );
    tran1.commit();

    sleep( 1 );

    work tran2( conn, "getEndValues" );
    const result endTupleValues = getTupleValues( tran2, argc > 2 ? argv[2] : 0 );
    const result endBlockValues = getBlockValues( tran2, argc > 2 ? argv[2] : 0 );

    printDeltas( begTupleValues, endTupleValues, begBlockValues, endBlockValues );
}

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] disjoint union types
On Tue, Oct 09, 2007 at 10:30:15AM -0500, Erik Jones wrote: On Oct 9, 2007, at 9:38 AM, Sam Mason wrote: CREATE TABLE circle ( id SERIAL PRIMARY KEY, radius REAL NOT NULL ); CREATE TABLE square ( id SERIAL PRIMARY KEY, sidelen REAL NOT NULL ); CREATE TABLE shapes ( id SERIAL PRIMARY KEY, tag INTEGER NOT NULL, circleid INTEGER REFERENCES circle CHECK ((tag = 1) = (circleid IS NOT NULL)), squareid INTEGER REFERENCES square CHECK ((tag = 2) = (squareid IS NOT NULL)) ); You could use after triggers on your circle and shape tables to automatically make the insert into shapes for you. Yes, that helps a bit with getting data in. Doing anything generally useful with this data once it's there is still quite painful. If you've ever used a language supporting something like this natively then things get easier. In, say, Haskell you could do: data Shape = Circle Double | Square Double if I then wanted to get the area out I'd be able to do something like: area (Circle r) = pi * r ^ 2 area (Square l) = l ^ 2 mapping this over a list is easy. In SQL I'd need to do something much more complicated to get the areas of all these shapes out, maybe: SELECT s.id, CASE WHEN s.tag = 1 THEN pi() * r ^ 2 WHEN s.tag = 2 THEN l ^ 2 END AS area FROM shapes s LEFT JOIN circle c ON s.tag = 1 AND s.circleid = c.id LEFT JOIN square q ON s.tag = 2 AND s.squareid = q.id This is just a simple example, but if you've got a few of these structures to match up it starts to get complicated pretty quickly. Sam ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] ORDER BY - problem with NULL values
Richard Huxton escribió: Stefan Schwarzer wrote: Hi there, if I order a given year in DESCending ORDER, so that the highest values (of a given variable) for the countries are displayed at the top of the list, then actually the NULL values appear as first. Only below, I find the values ordered correctly. Is there any way to a) make the countries with NULL values appear at the bottom of the list b) neglect the NULL values by still allowing the countries to be displayed Not sure what you mean by (b), but (a) is straightforward enough. = SELECT * FROM (SELECT 1 as a UNION SELECT 2 UNION SELECT null::int) AS foo ORDER BY (a is null), a DESC; a --- 2 1 (3 rows) Looks easy. If I apply this to my SQL: SELECT SUM(CASE WHEN year=2002 THEN value ELSE NULL END) AS y_2002, SUM(CASE WHEN year=2001 THEN value ELSE NULL END) AS y_2001, SUM(CASE WHEN year=2000 THEN value ELSE NULL END) AS y_2000, c.name AS name FROM aquacult_prod_marine AS d LEFT JOIN countries AS c ON c.id = id_country GROUP BY name ORDER BY y_2000 DESC I would then say: ORDER BY (y_2000 is null), y_2000 DESC But then I get an Error warning: ERROR: column y_2000 does not exist What do I do wrong? Hmm... Nothing. The ORDER BY clause should get processed last, after column-aliasing (which labels your column y_2000). However, it seems like PG is evaluating the (X is null) clause earlier. = SELECT version,count(*) AS ct FROM items.wiki_items GROUP BY version ORDER BY (ct IS NULL); ERROR: column ct does not exist ^ = SELECT version,count(*) AS ct FROM items.wiki_items GROUP BY version ORDER BY (count(*) IS NULL); ...works... I can see why, but it's a pain. You've got two options: 1. Repeat the expression as I've done above ORDER BY (CASE (...) END IS NULL), y_2000 DESC 2. Wrap your query in another query so the column aliases are available: SELECT * FROM (your query) AS results ORDER BY y_2000 IS NULL, y_2000 DESC From 8.3 beta release notes: - ORDER BY ... NULLS FIRST/LAST I think this is what you want right? 
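Right - applied to the toy example from earlier in the thread, the 8.3 syntax would look like this (a sketch; requires PostgreSQL 8.3 or later):

```sql
-- NULLS LAST replaces the (col IS NULL) sort-key trick.
SELECT *
FROM (SELECT 1 AS a UNION SELECT 2 UNION SELECT NULL::int) AS foo
ORDER BY a DESC NULLS LAST;
-- a: 2, 1, then the NULL row
```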
Re: [GENERAL] pgodbc + Excel + msquery + background refresh
On 10/10/07, Ow Mun Heng [EMAIL PROTECTED] wrote: Just wonder if anyone here uses Excel to connect to PG via ODBC. I'm using it extensively as my platform to get data from PG/MSSQL directly into excel. (Excel uses the msqry32.exe file which is like a stripped down sql query tool and returns data directly into excel) When using mssql, connecting from excel to mssql, I can get the query to run in the background. Hence, a long running query will not interfere with normal running of other excel works. Eg: Create new sheets, graphing etc. Basically, MS has programmed Excel to use a cursor when it connects to MSSQL. The generic PGODBC connection in excel doesn't know how to do this apparently. I'm not sure if the problem is in pgodbc or excel. I'd tend to guess excel. Microsoft might be willing to work on making Excel work better with PostgreSQL, but I wouldn't expect it to be a priority, as they sell a competing product and this just makes it look better. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] How to speedup intarray aggregate function?
I have written in C all needed contrib functions: intarray.bidx() (binary search in a sorted list) and intagg.int_agg_append_state (bufferized appending of one array to another without linear memory reallocation). The speed now is great: in one case with intersection of 10 and 15000 arrays it becomes 30 ms (instead of 1600 ms - about 50 times faster). A few days later, after complex testing, I'll publish complete patches on the pgsql-hackers mailing list. On 10/10/07, Dmitry Koterov [EMAIL PROTECTED] wrote: Wow, seems I've found that! * Beginning in PostgreSQL 8.1, the executor's AggState node is passed as * the fmgr context value in all transfunc and finalfunc calls. It is * not really intended that the transition functions will look into the * AggState node, but they can use code like * if (fcinfo->context && IsA(fcinfo->context, AggState)) * to verify that they are being called by nodeAgg.c and not as ordinary * SQL functions. The main reason a transition function might want to know * that is that it can avoid palloc'ing a fixed-size pass-by-ref transition * value on every call: it can instead just scribble on and return its left * input. Ordinarily it is completely forbidden for functions to modify * pass-by-ref inputs, but in the aggregate case we know the left input is * either the initial transition value or a previous function result, and * in either case its value need not be preserved. See int8inc() for an * example. Notice that advance_transition_function() is coded to avoid a * data copy step when the previous transition value pointer is returned. So theoretically I may create an intarray_aggregate_push() function which, when called by an aggregate, does not reallocate and copy memory each time it is called. Instead, it may allocate 1M of memory at once (with a gap), or enlarge the memory segment by a factor of 2 when it needs to reallocate (O(log2 N) reallocations instead of O(N)).
And here is an example from the source code:

Datum
int8inc(PG_FUNCTION_ARGS)
{
    if (fcinfo->context && IsA(fcinfo->context, AggState))
    {
        /*
         * Special case to avoid palloc overhead for COUNT(): when called from
         * nodeAgg, we know that the argument is modifiable local storage, so
         * just update it in-place.
         *
         * Note: this assumes int8 is a pass-by-ref type; if we ever support
         * pass-by-val int8, this should be ifdef'd out when int8 is
         * pass-by-val.
         */
        int64      *arg = (int64 *) PG_GETARG_POINTER(0);
        int64       result;

        result = *arg + 1;

        /* Overflow check */
        if (result < 0 && *arg > 0)
            ereport(ERROR,
                    (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                     errmsg("bigint out of range")));

        *arg = result;
        PG_RETURN_POINTER(arg);
    }
    ...
}

On 10/10/07, Dmitry Koterov [EMAIL PROTECTED] wrote: Thanks for your comment. I see two possible solution directions: 1. Is it possible to create a C function which could accept something like ROWSET(ARRAY[]) in its input parameters? E.g. to call it as SELECT array_rowset_glue((SELECT arrayfield FROM arraytable)); or something like this? 2. Is it possible to implement in C something like this? array_buffer_init(); SELECT array_buffer_push(arrayfield) FROM arraytable; ids := array_buffer_get(); array_buffer_free(); where array_buffer_push() is an aggregate function which returns void, but, as its side-effect, appends arrayfield to the global array buffer for later access with array_buffer_get(). On 10/10/07, Filip Rembiałkowski [EMAIL PROTECTED] wrote: 2007/10/10, Dmitry Koterov [EMAIL PROTECTED]: Hello. I created an aggregate: CREATE AGGREGATE intarray_aggregate_push (_int4) ( STYPE = _int4, SFUNC = intarray_push_array, INITCOND = '{}' ); (or - I may use _int_union instead of intarray_push_array, its speed is practically the same in my case). This aggregate merges together a list of integer[] arrays resulting in one big array with all elements.
Then I want to use this aggregate: SELECT intarray_aggregate_push(arrayfield) FROM arraytable The table arraytable contains a lot of rows (about 5000), each row has an array with a length of 5-10 elements, so - the resulting array should contain about 5 elements. The query is okay, but its speed is too bad: about 1 second. The main problem is the speed of the intarray_aggregate_push function - it is quite slow, because intarray_push_array reallocates the memory each time I merge two arrays. I am pretty sure that the reallocation and copying is the bottleneck, because if I use another dummy aggregate: CREATE AGGREGATE
Re: [GENERAL] disjoint union types
On 10/10/07, Sam Mason [EMAIL PROTECTED] wrote: On Tue, Oct 09, 2007 at 10:30:15AM -0500, Erik Jones wrote: On Oct 9, 2007, at 9:38 AM, Sam Mason wrote: CREATE TABLE circle ( id SERIAL PRIMARY KEY, radius REAL NOT NULL ); CREATE TABLE square ( id SERIAL PRIMARY KEY, sidelen REAL NOT NULL ); CREATE TABLE shapes ( id SERIAL PRIMARY KEY, tag INTEGER NOT NULL, circleid INTEGER REFERENCES circle CHECK ((tag = 1) = (circleid IS NOT NULL)), squareid INTEGER REFERENCES square CHECK ((tag = 2) = (squareid IS NOT NULL)) ); You could use after triggers on your circle and shape tables to automatically make the insert into shapes for you. Yes, that helps a bit with getting data in. Doing anything generally useful with this data once it's there is still quite painful. If you've ever used a language supporting something like this natively then things get easier. In, say, Haskell you could do: data Shape = Circle Double | Square Double if I then wanted to get the area out I'd be able to do something like: area (Circle r) = pi * r ^ 2 area (Square l) = l ^ 2 mapping this over a list is easy. In SQL I'd need to do something much more complicated to get the areas of all these shapes out, maybe: SELECT s.id, CASE WHEN s.tag = 1 THEN pi() * r ^ 2 WHEN s.tag = 2 THEN l ^ 2 END AS area FROM shapes s LEFT JOIN circle c ON s.tag = 1 AND s.circleid = c.id LEFT JOIN square q ON s.tag = 2 AND s.squareid = q.id This is just a simple example, but if you've got a few of these structures to match up it starts to get complicated pretty quickly. Sam ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match I wonder if the best way to go would be to use the OO stuff. If you had a shapes table, that had the various operations you were interested in (say area), then you could have a circle table inherit from that, and automatically compute the area with a trigger, as Erik suggested. 
Then you could just query shapes:

CREATE TABLE shapes (
    shape_id serial PRIMARY KEY,
    area real not null
);

CREATE TABLE circle (
    radius real not null
) INHERITS (shapes);

CREATE FUNCTION circle_area() RETURNS trigger AS $circle_area$
BEGIN
    NEW.area = pi() * NEW.radius ^ 2;
    RETURN NEW;
END;
$circle_area$ LANGUAGE plpgsql;

CREATE TRIGGER circle_area BEFORE INSERT OR UPDATE ON circle
    FOR EACH ROW EXECUTE PROCEDURE circle_area();

INSERT INTO circle (radius) VALUES (5);

SELECT * FROM shapes;
 shape_id |  area
----------+---------
        1 | 78.5398

Ian Barber
[GENERAL] Disable triggers per transaction 8.2.3
Hello list, I wonder if it is possible to disable triggers for a single transaction. I know I can disable triggers per table but then I need to disable all triggers in all recursive tables before doing my query. Can I do: BEGIN TRANSACTION; DISABLE TRIGGERS; DELETE FROM tbl_foo WHERE ID > 5; ENABLE TRIGGERS; COMMIT; Or do I have to do: BEGIN TRANSACTION; ALTER TABLE tbl_foo DISABLE TRIGGERS ALL; ALTER TABLE tbl_foo_bar DISABLE TRIGGERS ALL; DELETE FROM tbl_foo WHERE ID > 5; ALTER TABLE tbl_foo ENABLE TRIGGERS ALL; ALTER TABLE tbl_foo_bar ENABLE TRIGGERS ALL; COMMIT; Or is it even possible? I only want my triggers to be disabled for the transaction and not the global database. Thanks, Henke ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] disjoint union types
On Wed, Oct 10, 2007 at 05:02:36PM +0100, Ian Barber wrote: I wonder if the best way to go would be to use the OO stuff. I don't see how the following is object orientated, but I'm not sure it matters much. If you had a shapes table, that had the various operations you were interested in (say area) I prefer to keep everything as normalised as possible; if I start putting an area column somewhere then I'm forced to keep it updated somehow. The example I gave was rather simple, but I'd like to do lots of other things besides some (simple) calculations, e.g. matching tables up depending on the internal state of each object. , then you could have a circle table inherit from that, and automatically compute the area with a trigger, as Erik suggested. Then you could just query shapes: CREATE TABLE shapes ( shape_id serial PRIMARY KEY, area real not null ); CREATE TABLE circle ( radius real not null ) INHERITS (shapes); Postgres implements inheritance in a strange way (the way it is at the moment makes sense from an implementation, but not a user's, point of view); you can end up with a circle and a square both with shape_id=1 if I don't take a lot of care. CREATE FUNCTION circle_area() RETURNS trigger AS $circle_area$ BEGIN NEW.area = pi() * NEW.radius ^ 2; RETURN NEW; END; $circle_area$ LANGUAGE plpgsql; CREATE TRIGGER circle_area BEFORE INSERT OR UPDATE ON circle FOR EACH ROW EXECUTE PROCEDURE circle_area(); INSERT INTO circle (radius) values (5) SELECT * FROM shapes shape_id 1 area 78.5398 This works to store the area of the shape, but doesn't allow me to work with more complicated structures. I'll try and think up a better example and send it along to the list when I can describe it. Thanks, Sam ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] disjoint union types
On Oct 10, 2007, at 11:42 AM, Sam Mason wrote: On Wed, Oct 10, 2007 at 05:02:36PM +0100, Ian Barber wrote: I wonder if the best way to go would be to use the OO stuff. I don't see how the following is object orientated, but I'm not sure it matters much. If you had a shapes table, that had the various operations you were interested in (say area) I prefer to keep everything as normalised as possible, if I start putting an area column somewhere then I'm forced to keep it updated somehow. The example I gave was rather simple, but I'd like to do lots of other things beside some (simple) calculations, e.g. matching tables up depending on the internal state of each object. , then you could have a circle table inherit from that, and automatically compute the area with a trigger, as Erik suggested. Then you could just query shapes: CREATE TABLE shapes ( shape_id serial PRIMARY KEY, area real not null ); CREATE TABLE circle ( radius real not null ) INHERITS (shapes); Postgres implements inheritance in a strange way (the way it is at the moment makes sense from an implementation, but not users', point of view), you can end up with a circle and square both with shape_id=1 if I don't take a lot of care. It doesn't take much care at all to avoid that: don't use SERIAL for the primary key of the parent. Instead use an explicit id integer NOT NULL DEFAULT nextval('some_seq'), that way all of the child tables will use the same sequence as the parent. That being said, I'm not convinced that table inheritance is what's needed here. I'll wait until you post the example you mention below before commenting further. 
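The shared-sequence suggestion, sketched against the shapes example from the thread (the sequence name here is made up):

```sql
-- One explicit sequence instead of SERIAL: child tables inherit the
-- DEFAULT, so circles and squares draw ids from the same sequence and
-- can no longer both end up with shape_id = 1.
CREATE SEQUENCE shapes_shape_id_seq;

CREATE TABLE shapes (
    shape_id integer NOT NULL DEFAULT nextval('shapes_shape_id_seq') PRIMARY KEY,
    area     real NOT NULL
);

CREATE TABLE circle (
    radius real NOT NULL
) INHERITS (shapes);

CREATE TABLE square (
    sidelen real NOT NULL
) INHERITS (shapes);
```

Note the PRIMARY KEY constraint itself is not inherited; it is the shared sequence that keeps the generated ids distinct across the hierarchy.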
CREATE FUNCTION circle_area() RETURNS trigger AS $circle_area$ BEGIN NEW.area = pi() * NEW.radius ^ 2; RETURN NEW; END; $circle_area$ LANGUAGE plpgsql; CREATE TRIGGER circle_area BEFORE INSERT OR UPDATE ON circle FOR EACH ROW EXECUTE PROCEDURE circle_area(); INSERT INTO circle (radius) values (5) SELECT * FROM shapes shape_id 1 area 78.5398 This works to store the area of the shape, but doesn't allow me to work with work with more complicated structures. I'll try and think up a better example and send it along to the list when I can describe it. Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] ALL( ... ) and ANY ( ... ) didn't behave as expected
I expect that my intuition is incorrect about the use of ALL() and ANY(), but I found my result to be the reverse of what I expected. Can anyone explain why ANY() behaves the way it does? Here are two test case examples that do what I expect: -- find all parents that only have boys SELECT * FROM Parents AS P WHERE 'boy' = ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); -- find all parents that only have girls SELECT * FROM Parents AS P WHERE 'girl' = ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); Here is the example that doesn't do what I expect: --find all parents that have a mixture of boys and girls. --but this doesn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); --This query gets what I want --but I believe that it shouldn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); Regards, Richard Broersma Jr. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] ALL( ... ) and ANY ( ... ) didn't behave as expected
Richard Broersma Jr wrote: Here is the example that doesn't do what I expect: --find all parents that have a mixture of boys and girls. --but this doesn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); I read this as: Find all parents such that none of their kids are girls and none of their kids are boys. That is, ALL of their genders are <> 'girl', AND ALL of their genders are <> 'boy'. Under the obvious assumptions about gender, the result is of course empty - except it's not clear to me what should happen for childless people ... --This query gets what I want --but I believe that it shouldn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); I read this as: Find all parents such that at least one of their kids is not a girl, and at least one of their kids is not a boy. Of course, this could also be written with =. - John Burger MITRE ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] ALL( ... ) and ANY ( ... ) didn't behave as expected
Richard Broersma Jr wrote: Here is the example that doesn't do what I expect: I'm guessing you're just stood too close to the problem. --find all parents that have a mixture of boys and girls. --but this doesn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); This translates to WHERE none of the children are girls AND none of the children are boys Assuming you have a two-state gender then that's nothing. For 'girl' <> ALL (...) to be true, all the values you test must be not girls. --This query gets what I want --but I believe that it shouldn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); Translates to: WHERE any child is not a girl AND any child is not a boy So - at least one of each. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] ALL( ... ) and ANY ( ... ) didn't behave as expected
On Oct 10, 2007, at 12:38 PM, Richard Broersma Jr wrote: I expect that my intuition is incorrect about the use of ALL() and ANY(), but I found my result to be the reverse of what I expected. Can anyone explain why ANY() behaves the way it does? Here are two test case examples that do what I expect: -- find all parents that only have boys SELECT * FROM Parents AS P WHERE 'boy' = ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); -- find all parents that only have girls SELECT * FROM Parents AS P WHERE 'girl' = ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); Here is the example that doesn't do what I expect: --find all parents that have a mixture of boys and girls. --but this doesn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ALL ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); This says "Give me all parents for which it is true that all of their children are not girls and all children are not boys", which will only be true for parents with no children. Add a record to your Parents table without any corresponding Children record(s) and you'll get a result. --This query gets what I want --but I believe that it shouldn't return anything SELECT * FROM Parents AS P WHERE 'girl' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ) AND 'boy' <> ANY ( SELECT gender FROM Children AS C1 WHERE C1.parentid = P.parentid ); This is correct. It reads "Give me all parents for whom any of their children is not a girl AND any of their children is not a boy." So, for a parent with both a boy and a girl, the boy is not a girl and the girl is not a boy. You could replace the <> ANY with a simple IN as then it would be "Give me all parents who have both a boy and a girl." 
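That last remark, written out as a query against the thread's schema (a sketch; IN is equivalent to = ANY here):

```sql
-- Parents that have at least one girl AND at least one boy.
SELECT *
FROM Parents AS P
WHERE 'girl' IN (SELECT gender FROM Children AS C WHERE C.parentid = P.parentid)
  AND 'boy'  IN (SELECT gender FROM Children AS C WHERE C.parentid = P.parentid);
```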
[GENERAL] preferred way to use PG_GETARG_BYTEA_P in SPI
I'm curious what's considered the best way to invoke PG_GETARG_BYTEA_P in an SPI routine and properly check for null input in non 'strict' routines. Right now, I'm looking at PG_GETARG_POINTER to check for a null value before using PG_GETARG_BYTEA_P to assign to the bytea pointer. ISTM a little bit unintuitive to have PG_GETARG_BYTEA_P crash on null input...why not just assign null and skip the de-toast call in the macro? merlin ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] ALL( ... ) and ANY ( ... ) didn't behave as expected
Richard Broersma Jr [EMAIL PROTECTED] writes: Can anyone explain why ANY() behaves that way it does? Aside from the responses already given, I'm wondering whether you have any NULL entries in Children.gender. NULL rows within a NOT IN subselect tend to produce confusing results ... regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] ALL( ... ) and ANY ( ... ) didn't behave as expected
--- John D. Burger [EMAIL PROTECTED] wrote: I read this as: Find all parents such that none of their kids are girls and none of their kids are boys. That is, ALL of their genders are 'girl', AND ALL of their genders are 'boy'. Under the obvious assumptions about gender, the result is of course empty - except it's not clear to me what should happen for childless people ... Thanks everyone that makes sense! Regards, Richard Broersma Jr. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] time penalties on triggers?
On Oct 5, 2007, at 9:10 AM, Kenneth Downs wrote: I also found it very hard to pin down the penalty of the trigger, but came up with rough figures of 30-50% overhead. The complexity of the trigger did not matter. in which language did you write your triggers? ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Disable triggers per transaction 8.2.3
On 10/10/07, Henrik [EMAIL PROTECTED] wrote: Hello list, I wonder if it is possible to disable triggers for a single transaction. I know I can disable triggers per table but then I need to disable all triggers in all recursive tables before doing my query. Can I do: BEGIN TRANSACTION; DISABLE TRIGGERS; DELETE FROM tbl_foo WHERE ID > 5; ENABLE TRIGGERS; COMMIT; Or do I have to do: BEGIN TRANSACTION; ALTER TABLE tbl_foo DISABLE TRIGGERS ALL; ALTER TABLE tbl_foo_bar DISABLE TRIGGERS ALL; DELETE FROM tbl_foo WHERE ID > 5; ALTER TABLE tbl_foo ENABLE TRIGGERS ALL; ALTER TABLE tbl_foo_bar ENABLE TRIGGERS ALL; COMMIT; Or is it even possible? I only want my triggers to be disabled for the transaction and not the global database. 1. upgrade to 8.2.5 asap 2. disable triggers is possible, but alter acquires an excl lock on the table. so, while you are disabling for your txn only, nobody else does anything until you finish (is that what you want?) 3. there are other strategies to attack this problem for particular situations. merlin ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
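Point 2 in full, using the actual syntax (DISABLE TRIGGER, singular, not "DISABLE TRIGGERS"). A sketch against the thread's table names; the DELETE predicate `id > 5` is an assumption, since the operator was lost in the archived message.

```sql
-- DISABLE TRIGGER is a catalog change, not transaction-scoped, so the
-- matching ENABLE before COMMIT is what restores the triggers. The
-- ALTERs take exclusive locks: other sessions block on these tables
-- until the transaction ends.
BEGIN;
ALTER TABLE tbl_foo     DISABLE TRIGGER ALL;
ALTER TABLE tbl_foo_bar DISABLE TRIGGER ALL;
DELETE FROM tbl_foo WHERE id > 5;  -- predicate assumed
ALTER TABLE tbl_foo     ENABLE TRIGGER ALL;
ALTER TABLE tbl_foo_bar ENABLE TRIGGER ALL;
COMMIT;
```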
Re: [GENERAL] timer script from SAMS book or equivalent?
My Ruby version is found below. Note it requires the postgres-pr ruby driver. Also note I didn't really ruby-ize it too much... for the most part it's a one-to-one translation. One different thing to note... this script expects you to have postgres-pr installed via rubygems. You can modify the require statements (actually just comment out the rubygems line) if yours is installed in a non-rubygems way. Also note, this requires you to define your connection parameters in env variables, which works in my situation:

PG_TIMER_DB - name of the database
PG_TIMER_USER - name of database user
PG_TIMER_PASS - password of database user
PG_TIMER_URI - connection uri that postgres-pr understands. Examples: tcp://localhost:5432 or unix:/tmp/.s.PGSQL.5432

Hope it helps someone else.

###
require 'rubygems'
require 'postgres-pr/connection'

$tupleQuery = "SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch FROM pg_stat_all_tables"
$blockQuery = "SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit FROM pg_statio_all_tables"
$use_jdbc = false

def usage
  usstr = <<-EOL
#{$0} query

Example: #{$0} "select * from users"

Note: the following environment variables MUST be set:
  PG_TIMER_DB   - name of the database
  PG_TIMER_USER - name of database user
  PG_TIMER_PASS - password of database user
  PG_TIMER_URI  - connection uri that postgres-pr understands.
                  Examples: tcp://localhost:5432
                            unix:/tmp/.s.PGSQL.5432
  EOL
  puts usstr
  exit()
end

$dbname = ENV['PG_TIMER_DB']
$dbuser = ENV['PG_TIMER_USER']
$dbpass = ENV['PG_TIMER_PASS']
$dburi  = ENV['PG_TIMER_URI']

[$dbname, $dbuser, $dbpass, $dburi].each do |one|
  usage() if one.nil?
end

class PostgresPR::Connection::Result
  def get_field_at_row(field, row)
    idx = @fields.collect { |f| f.name }.index field
    @rows[row][idx]
  end
end

class PureDBUtil
  def initialize()
    @conn = PostgresPR::Connection.new($dbname, $dbuser, $dbpass, $dburi)
  end

  def start_tran
    @conn.query("BEGIN")
  end

  def commit
    @conn.query("COMMIT")
  end

  def exec(query)
    @conn.query(query)
  end
end

class JDBCDBUtil
  def initialize(dbname=nil)
    raise Exception, "not implemented"
  end
end

def getTupleValues(tran, table_name=nil)
  if !table_name.nil?
    return tran.exec($tupleQuery + " ORDER by relname")
  else
    return tran.exec($tupleQuery + " WHERE relname = '#{table_name}'")
  end
end

def getBlockValues(tran, table_name)
  if !table_name.nil?
    return tran.exec($blockQuery + " ORDER by relname")
  else
    return tran.exec($blockQuery + " WHERE relname = '#{table_name}'")
  end
end

def getDelta(n, beginning, ending, col)
  endVal = 0
  begVal = 0
  endVal = ending.get_field_at_row(col, n)
  begVal = beginning.get_field_at_row(col, n)
  return endVal.to_f - begVal.to_f
end

def getColumnWidth(res, col)
  max = 0
  res.rows.each do |one|
    if one[col].size > max
      max = one[col].size
    end
  end
  return max
end

def fill(len, c)
  c * len
end

def printDeltas(begTuples, endTuples, begBlocks, endBlocks)
  nameWidth = 15
  str = ""
  str << '+' << fill(nameWidth, '-') << "+---+--+\n"
  str << '|' << fill(nameWidth, ' ') << "| SEQUENTIAL I/O |INDEXED I/O |\n"
  str << '|' << fill(nameWidth, ' ') << "| scans | tuples |heap_blks |cached | scans | tuples | idx_blks |cached|\n"
  str << '|' << fill(nameWidth, '-') << "+---++---+--+---++--+--+\n"

  totSeqScans = 0
  totSeqTuples = 0
  totHeapBlks = 0
  totHeapHits = 0
  totIdxScans = 0
  totIdxTuples = 0
  totIdxBlks = 0
  totIdxHits = 0
  tableCount = 0

  0.upto begTuples.rows.size-1 do |row|
    seqScans  = getDelta(row, begTuples, endTuples, "seq_scan")
    seqTuples = getDelta(row, begTuples, endTuples, "seq_tup_read")
    heapBlks  = getDelta(row, begBlocks, endBlocks, "heap_blks_read")
    heapHits  = getDelta(row, begBlocks, endBlocks, "heap_blks_hit")
    idxScans  = getDelta(row, begTuples, endTuples, "idx_scan")
    idxTuples = getDelta(row, begTuples, endTuples, "idx_tup_fetch")
    idxBlks   = getDelta(row, begBlocks, endBlocks, "idx_blks_read")
    idxHits   = getDelta(row, begBlocks, endBlocks, "idx_blks_hit")
    if (( seqScans + seqTuples + heapBlks + heapHits + idxScans + idxTuples +
Re: [GENERAL] timer script from SAMS book or equivalent?
Sorry... the first version was a bit rash and left out some features, particularly filtering by table. Here's the patch to correct it. If anyone wants the corrected version, email me off list. Thanks, John

# diff -w pg_timer_old.rb pg_timer.rb
18a19
> app = File.basename $0
20,21c21,25
< #{$0} query
< Example: #{$0} select * from users
---
> #{app} query [tablename] or
> #{app} /path/to/file/containing/query.sql [tablename]
> Example: #{app} select * from users users
54a59
78c83
< if !table_name.nil?
---
> if table_name.nil?
86c91
< if !table_name.nil?
---
> if table_name.nil?
194c199,211
< arg = args[0]
---
> first = args[0]
> query = nil
> if !first.nil? and File.exists?(first)
>   File.open(first, "r") do |f|
>     query = f.read
>   end
> else
>   query = first
> end
> table = args[1]
> usage() if args.size < 1
196d212
< usage() if arg.nil?
201a218
204,206c221,223
< begTupleValues = getTupleValues(tran1, arg)
< begBlockValues = getBlockValues(tran1, arg)
< ignore = tran1.exec(args[0])
---
> begTupleValues = getTupleValues(tran1, table)
> begBlockValues = getBlockValues(tran1, table)
> ignore = tran1.exec(query)
212,213c229,232
< endTupleValues = getTupleValues(tran2, arg)
< endBlockValues = getBlockValues(tran2, arg)
---
> tran2.start_tran()
> endTupleValues = getTupleValues(tran2, table)
> endBlockValues = getBlockValues(tran2, table)
> tran2.commit()

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] PostgreSQL Conference Fall 2007, Registration closing soon!
PostgreSQL Conference Fall 2007 is a 1-day conference happening on October 20, 2007 on the beautiful Portland State University campus. PSU's Computer Science Graduate Student Council is graciously hosting an excellent group of speakers made up of prominent PostgreSQL consultants, developers and advocates. We've had incredible interest in our conference, and will be closing registration soon. If you haven't registered yet, do it now! http://www.postgresqlconference.org Registration is free for students, although we are asking for a $10 donation through the website for the after-party. For all others, the fee is $60 for the conference and dinner. All proceeds go directly to Software in the Public Interest, a 501(c)3 non-profit, and will be used for PostgreSQL advocacy. For detailed descriptions of the speakers and talks, please visit: http://www.postgresqlconference.org/talks/ And here's the schedule: * 8:00 - 8:45 - Coffee / Breakfast / Social (provided at the conference) * 8:45 - 9:00 - Joshua Drake - A word from our sponsors * 9:00 - 9:25 - Josh Berkus - Welcome to 8.3 * 9:25 - 10:20 - David Wheeler - Ruby on Rails Essentials for PostgreSQL Enthusiasts -- 10 minute break -- * 10:30 - 11:20 - Robert Hodges - Scaling PostgreSQL Performance with uni/cluster * 11:20 - 12:10 - Neil Conway - Understanding Query Execution in PostgreSQL * 12:10 - 13:15 - LUNCH (on your own) * 13:15 - 13:45 - Mark Wong - PostgreSQL Performance * 13:45 - 14:15 - Joshua Drake - PL/Proxy and Horizontal Scaling * 14:15 - 15:05 - Web Sprague - PostGIS (geographic database) -- 10 minute break -- * 15:15 - 16:05 - David Fetter - Babel of procedural languages * 16:05 - 17:00 - Robert Treat - PostgreSQL Partitioning, semantics, pitfalls and implementation * 17:00 - 17:25 - Josh Berkus - Stupid Solaris tricks * 17:25 - 17:30 - Closing Remarks, Thanks, Where's the party? 
* 17:30 - 18:00 - Travel to dinner/party * 18:00 -- Dinner/Party at the Paramount Hotel till they kick us out (provided by conference) And once again, thanks to our sponsors: Command Prompt: http://www.commandprompt.com/ Continuent: http://www.continuent.com/ EnterpriseDB: http://www.enterprisedb.com/ Greenplum : http://www.greenplum.com/ Hyperic: http://www.hyperic.com/ OmniTI: http://www.omniti.com/ OTG: http://www.otg-nc.com/ Sun: http://www.sun.com/ Truviso: http://www.truviso.com/ Other Sponsors: PDXPUG: http://pugs.postgresql.org/pdx PSU: http://www.pdx.edu -- Selena Deckelmann PDXPUG - Portland PostgreSQL Users Group http://pugs.postgresql.org/pdx http://www.postgresqlconference.org ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Default Ubuntu post-install Qs (PG v7.4)
1) Am I right when I assume an init.d type of start/kill scenario is used?

2) ps -ef shows the process: /usr/lib/postgresql/7.4/postmaster -D /var/lib/postgresql/7.4/main postgres: stats buffer process postgres: stats collector process I see no logfile explicitly defined. Where is that specified?

3) In /var/log/postgresql I am getting logging, but not enough. How can I get more details?

4) User postgres keeps getting the following showing up: [EMAIL PROTECTED]:/tmp$ 2007-10-08 20:02:01 LOG: incomplete startup packet 2007-10-08 20:07:29 LOG: recycled transaction log file 0001007D 2007-10-09 00:02:02 LOG: incomplete startup packet 2007-10-09 05:02:01 LOG: incomplete startup packet 2007-10-09 10:02:01 LOG: incomplete startup packet

Thank you! Ralph Smith [EMAIL PROTECTED]
Re: [GENERAL] preferred way to use PG_GETARG_BYTEA_P in SPI
Merlin Moncure [EMAIL PROTECTED] writes: I'm curious what's considered the best way to invoke PG_GETARG_BYTEA_P in an SPI routine and properly check for null input in non 'strict' routines. Right now, I'm looking at PG_GETARG_POINTER to check for null value before using PG_GETARG_BYTEA_P to assign to the bytea pointer. That would be entirely wrong. In a non-strict function, test PG_ARGISNULL(n) before attempting any variant of PG_GETARG(n). Grepping for PG_ARGISNULL will yield lots of examples. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
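To make Tom's idiom concrete, here is a minimal sketch of a non-strict C function (the function name and return value are hypothetical, not from the thread; this assumes a normal build against the PostgreSQL server headers):

```c
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(my_bytea_len);

/* Hypothetical non-strict function: returns the payload length of a
 * bytea argument, or NULL when the argument is NULL. */
Datum
my_bytea_len(PG_FUNCTION_ARGS)
{
    bytea *arg;

    /* In a non-strict function, test for NULL *before* any PG_GETARG */
    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();

    /* Safe now: the argument is known to be non-null */
    arg = PG_GETARG_BYTEA_P(0);

    PG_RETURN_INT32(VARSIZE(arg) - VARHDRSZ);
}
```

A STRICT function could skip the check entirely, because the executor never calls it with NULL arguments; the check is only needed (and then always needed) in the non-strict case.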
[GENERAL] Coercing compound types to use generic ROW comparison operators
I am storing a rowtype that keeps the primary key column(s) of another table. E.g., create table point (x int, y int, stuff text, primary key (x, y)); then, think: create type point_pk as (x int, y int). When I go to compare point_pks against one another I get errors about missing comparison operators. HOWEVER, I can do this no problem: select row(1,2)=row(2,3); I would REALLY like to be able to use the generic row comparison functions, which, as detailed in the manual, are equivalent to comparing elements left-to-right. Is there a way I can convince my custom composite data type (point_pk) to use the row-wise comparison functions, so that I don't have to hackishly rewrite the comparison algorithm for each composite type? Using 8.1.5. Thanks, Randall -- Randall Lucas Tercent, Inc. DF93EAD1 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
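One possible workaround (a sketch only, not tested against 8.1.5): give the composite type its own comparison support by writing the field-by-field function once and wiring it to an operator. This is exactly the per-type boilerplate the poster hopes to avoid, but it does let = work on point_pk values:

```sql
-- point_pk_eq is a name invented for this sketch
CREATE FUNCTION point_pk_eq(a point_pk, b point_pk) RETURNS boolean
    LANGUAGE sql IMMUTABLE AS
$$ SELECT ($1).x = ($2).x AND ($1).y = ($2).y $$;

CREATE OPERATOR = (
    leftarg   = point_pk,
    rightarg  = point_pk,
    procedure = point_pk_eq
);
```

Another option, staying entirely within the built-in row comparison the poster mentions, is to expand the composite at the comparison site, e.g. ROW((p1).x, (p1).y) = ROW((p2).x, (p2).y).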
Re: [GENERAL] Default Ubuntu post-install Qs (PG v7.4)
On 10/11/07, Ralph Smith [EMAIL PROTECTED] wrote: 1) Am I right when I assume an init.d type of start/kill scenario is used? Aye 2) The ps -ef shows the process: /usr/lib/postgresql/7.4/postmaster -D /var/lib/postgresql/7.4/main postgres: stats buffer process postgres: stats collector process I see no logfile explicitly defined. Where is that specified? Have a look in /etc/postgresql/7.4/main/postgresql.conf 3) In /var/log/postgresql I am getting logging, but not enough. How can I get more details? See above. Specifically postgresql.conf Thank you! Ralph Smith Cheers, Andrej -- Please don't top post, and don't use HTML e-Mail :} Make your quotes concise. http://www.american.edu/econ/notes/htmlmail.htm ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
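For question 3, the knobs live in that same postgresql.conf. A sketch of the logging-related settings (parameter names recalled from the 7.4 era; check the comments in your own file, since several of these changed in 8.0):

```
log_connections  = true    # log each successful connection
log_statement    = true    # 7.4 uses a boolean here (8.0+ uses none/ddl/mod/all)
log_duration     = true    # log how long each statement took
log_min_messages = info    # raise verbosity; debug1..debug5 for still more
```

After editing, reload the server (the init script name varies by package; something like /etc/init.d/postgresql reload, or pg_ctl reload as the postgres user).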
Re: [GENERAL] pgodbc + Excel + msquery + background refresh
On Wed, 2007-10-10 at 10:22 -0500, Scott Marlowe wrote: On 10/10/07, Ow Mun Heng [EMAIL PROTECTED] wrote: Just wonder if anyone here uses Excel to connect to PG via ODBC. I'm using it extensively as my platform to get data from PG/MSSQL directly into excel. (Excel uses the msqry32.exe file which is like a stripped down sql query tool and returns data directly into excel) When using mssql, connecting from excel to mssql, I can get the query to run in the background. Hence, a long running query will not interfere with normal running of other excel works. Eg: Create new sheets, graphing etc. Basically, MS has programmed Excel to use a cursor when it connects to MSSQL. The generic PGODBC connection in excel doesn't know how to do this apparently. I don't think this is true.. This was logged in the PG instance. LOG: duration: 73510.302 ms statement: BEGIN;declare SQL_CUR00FC9B68 cursor with hold for Select * from foo('8/1/2007','10/1/2007','{AAA,BBB,CCC,DDD,EEE,FFF,GGZ}');fetch 100 in SQL_CUR00FC9B68 Additionally, the config for psqlodbc has this "Use Declare/Fetch" cursors option, which is set to 100 by default. I'm not sure if the problem is in pgodbc or excel. I'd tend to guess excel. Microsoft might be willing to work on making Excel work better with PostgreSQL, but I wouldn't expect it to be a priority, as they sell a competing product and this just makes it look better. Hear, hear... but this is where I would expect that headway be made, since Excel is (fortunately/unfortunately, take your pick) the main BI app that is ubiquitous enough to make a dent. http://andyonenterprisesoftware.com/2007/07/the-price-of-failure/ See point #3 ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] pgodbc + Excel + msquery + background refresh
On Wed, 2007-10-10 at 12:01 -0400, Gauthier, Dave wrote: -----Original Message----- From: [EMAIL PROTECTED] [mailto:pgsql-general-[EMAIL PROTECTED] On Behalf Of Scott Marlowe Sent: Wednesday, October 10, 2007 11:23 AM To: Ow Mun Heng Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] pgodbc + Excel + msquery + background refresh On 10/10/07, Ow Mun Heng [EMAIL PROTECTED] wrote: Just wonder if anyone here uses Excel to connect to PG via ODBC. I'm using it extensively as my platform to get data from PG/MSSQL directly into excel. (Excel uses the msqry32.exe file which is like a stripped down sql query tool and returns data directly into excel) When using mssql, connecting from excel to mssql, I can get the query to run in the background. Hence, a long running query will not interfere I have. Wasn't that tough. I used the ANSI ODBC Driver (psqlODBC). In Excel, I embed a macro that runs a query against the DB. I let the New Database Query (via Data - Import External Data) tool figure out the connection string. And I also use an extension called QueryEditor (I'll try to attach) to code the query. The stuff MS provides with regard to query development is pretty crude and limited. QueryEditor is much more accommodating. So you basically let New Database Query make the connection to the DB (select anything from any table, doesn't matter), then edit the query (change it altogether) using QueryEditor. I use it to make plpgsql function calls. Thanks for the code... it came through. But unfortunately, it suffered from the same fate. It's not asynchronous. Meaning, it hangs up Excel during the query. This is an issue because, well, the long and short of it is that it hangs up Excel. Hence it's not a solution at all. I've sent this same email to the odbc list, but it seems like either it's very low activity or not many people use ODBC to connect to PG (or via Excel, for that matter). ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] 8.2.4 selects make applications wait indefinitely
Hi all, We are facing some problems after the migration of our PostgreSQL 8.0 to the 8.2.4 version. The entire box runs under SUSE 10.3. bd_sgp=# select version(); version PostgreSQL 8.2.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.2.1 (SUSE Linux) The problem occurs when some SELECTs do not return any row and the application waits indefinitely. One of the SELECTs that locks is SELECT * FROM tb_produtos where codigo=5002; although the query SELECT codigo, descricao, embalagem, grupo, marca, unidade, grupo_cliente, codmarca, ativo, kg, codigo_deposito FROM tb_produtos where codigo=5002 runs fine. In summary, if you name all the table columns instead of using the * the query runs fine, otherwise it locks. I've queried pg_locks and no locks are there when the application was waiting. pg_stat_activity reports that the SELECT was accepted by the database because the column query_start is updated, although the pg_log (log_statement = 'all') does not report it. If the where clause is changed from codigo=5002 to codigo=3334 in the SELECT * statement, it runs fine. The problem only occurs if we use remote clients; if the SELECT * from tb_produtos where codigo=5002 is processed by a local (server) psql utility it runs fine too. When we try to run the query in a remote client using the windows psql it locks. The pg_stat_activity's current_query column reports idle. We also tried ODBC clients and they lock too. I've defined another table using the CREATE ... (LIKE ...) option and inserted all the 85 lines of tb_produtos into the new one and tried SELECT * FROM tb_produtostest where codigo=5002 against it. The query locks too. 
Summary: Local SELECT * FROM tb_produtos where codigo=5002 runs Remote SELECT * FROM tb_produtos where codigo=5002 locks Remote SELECT * from tb_produtos where codigo=3334 runs Remote SELECT list of all columns FROM tb_produtos where codigo=5002 runs I've noticed one strange local psql behaviour when we try to see the table definition of the tb_produtos table using the \d command. The column named codigo_deposito is returned as ndices:deposito. Apparently it is a psql issue, because if we query pg_attribute the column name appears correctly as codigo_deposito. I'm thinking to install 8.2.5 to fix this issue. Am I thinking right? Would appreciate any other suggestions. Thank you very much in advance. Reimer
Re: [GENERAL] 8.2.4 selects make applications wait indefinitely
Carlos H. Reimer [EMAIL PROTECTED] writes: ... if you name all the table columns instead of using the * the query runs fine, otherwise it locks. [ blink... ] You really, really, really need to provide a reproducible test case to prove that claim. The problem only occurs if we use remote clients, if the SELECT * from tb_produtos where codigo=5002 is processed by a local(server) psql utility it runs fine too. When we try to run the query in a remote client using the windows psql it locks. That sounds like your unspecified remote client has got some issues, but you've not provided any details that would let anyone else figure it out. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] 8.2.4 selects make applications wait indefinitely
On Oct 10, 2007, at 10:09 PM, Carlos H. Reimer wrote: [original message quoted in full; snipped] Are all of these remote connections from the same machine? Did you upgrade your client postgres libraries on your remote machine(s) as well? Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate & market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL]silent install
hi all,

msiexec /i D:\usr\local\postgre\postgresql-8.2-int.msi /qb INTERNALLAUNCH=1 ADDLOCAL=server,psql,docs SERVICEDOMAIN=%COMPUTERNAME% CREATESERICEUSER=1 BASEDIR=d:\usr\local\postgres SERVICEACCOUNT=postgres SERVICEPASSWORD=postgres SUPERPASSWORD=11 PERMITREMOTE=1

I try the silent install with the command above, but it always fails: "...this may indicate a problem with this package. The error code is 2711." Is there any solution?
[GENERAL] PLPGSQL 'SET SESSION ROLE' problems ..
Hi, Is it possible to change the current role to another arbitrary role using a PLPGSQL function? In essence I've a function authorise created by the db superuser with 'SECURITY DEFINER' specified. However from within plpgsql the 'superuser' attribute isn't honoured when trying to change roles (ie: the non-'superuser' rules for role change must be honoured). Postgresql version 8.2.3 Is this a bug? .. If not, how do I achieve an 'adhoc' change of role from within the rules system? tnx, -greg -- Dr. Greg Wickham Program Manager, e-Research Phone: +61 2 6222 3552 AARNet Mob: +61 4 0785 4566 CSIRO, Building 9, Banks St Email: [EMAIL PROTECTED] Yarralumla ACT 2600 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] PLPGSQL 'SET SESSION ROLE' problems ..
Greg Wickham [EMAIL PROTECTED] writes: Is it possible to change the current role to another arbitrary role using a PLPGSQL function? Yes. However from within plpgsql the 'superuser' attribute isn't honored when trying to change roles IIRC we deliberately restrict inheritance of superuser status. If you want to argue there's a bug here you need to provide specific details of what you did, as well as an argument why superuser status should be more laxly controlled. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
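For reference, a minimal sketch of the pattern under discussion (the function and role names are hypothetical). The point of Tom's reply is that SECURITY DEFINER runs the body as the function's owner, but superuser status is deliberately not inherited for role changes, so the owner must itself be allowed to SET ROLE to the target role (i.e. be a member of it):

```sql
-- Sketch only: the owner of this function must have the right to
-- SET ROLE to target_role; being created by a superuser with
-- SECURITY DEFINER is not, by itself, enough.
CREATE FUNCTION authorise(target_role text) RETURNS void
    LANGUAGE plpgsql SECURITY DEFINER AS
$$
BEGIN
    EXECUTE 'SET ROLE ' || quote_ident(target_role);
END;
$$;
```

Granting the definer membership in the roles it needs to assume (GRANT target_role TO definer_role) is the "laxly controlled" alternative Tom alludes to; the server will not do it implicitly.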
[GENERAL] Not able to insert binary Data having NULL
Hi, I have a bytea column in one table and, using the C language, I am trying to insert binary data (containing NUL bytes) into the bytea column, but it only inserts/updates up to the first NUL byte. Please let me know whether I am missing something. Any other approach - may I use some data type instead of bytea so that any binary data can go there? Best Regards, Manish Jain
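The thread shows no reply, but the usual cause of this symptom is passing the value through a C string or interpolating it into the SQL text, where the first zero byte terminates the string. A hedged sketch of one standard libpq approach (the table name t and column b are made up): send the value as a binary parameter, so its length is given explicitly rather than inferred from a NUL terminator.

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string is an assumption for illustration */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Binary payload containing an embedded NUL byte */
    const char data[] = { 0x01, 0x00, 0x02, 0x03 };

    const char *values[1]  = { data };
    int         lengths[1] = { (int) sizeof(data) }; /* explicit length: NULs survive */
    int         formats[1] = { 1 };                  /* 1 = binary parameter format */

    PGresult *res = PQexecParams(conn,
                                 "INSERT INTO t (b) VALUES ($1)",
                                 1, NULL, values, lengths, formats, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

Alternatively, if the statement must be built as text, the value has to go through PQescapeByteaConn() first; any plain strcpy/sprintf path will truncate at the first NUL.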