[HACKERS] float8 regression failure (HEAD, cygwin)
Hello,

While setting up a buildfarm installation for cygwin, I've encountered the following regression failure:

     float8 ... FAILED

== pgsql.3132/src/test/regress/regression.diffs ==

*** ./expected/float8-small-is-zero.out	Tue Jul 18 09:24:52 2006
--- ./results/float8.out	Tue Jul 18 09:53:42 2006
***************
*** 13,29 ****
  SELECT '-10e400'::float8;
  ERROR:  "-10e400" is out of range for type double precision
  SELECT '10e-400'::float8;
!  float8
! --------
!       0
! (1 row)
!
  SELECT '-10e-400'::float8;
!  float8
! --------
!      -0
! (1 row)
!
  -- bad input
  INSERT INTO FLOAT8_TBL(f1) VALUES ('');
  ERROR:  invalid input syntax for type double precision: ""
--- 13,21 ----
  SELECT '-10e400'::float8;
  ERROR:  "-10e400" is out of range for type double precision
  SELECT '10e-400'::float8;
! ERROR:  "10e-400" is out of range for type double precision
  SELECT '-10e-400'::float8;
! ERROR:  "-10e-400" is out of range for type double precision
  -- bad input
  INSERT INTO FLOAT8_TBL(f1) VALUES ('');
  ERROR:  invalid input syntax for type double precision: ""
***************
*** 377,383 ****
--- 369,377 ----
  INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');
  ERROR:  "-10e400" is out of range for type double precision
  INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');
+ ERROR:  "10e-400" is out of range for type double precision
  INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');
+ ERROR:  "-10e-400" is out of range for type double precision
  -- maintain external table consistency across platforms
  -- delete all values and reinsert well-behaved ones
  DELETE FROM FLOAT8_TBL;

This is happening on cygwin 1.5.20 (running on top of WinXP), gcc 3.4.4. The entire check.log can be found here: http://www.newsoftcontrol.ro/~am/pgfarm/check.log The other logs generated by the buildfarm can be found here: http://www.newsoftcontrol.ro/~am/pgfarm/

Cheers, Adrian Maier ---(end of broadcast)--- TIP 6: explain analyze is your friend
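[Editor's note: the float8-small-is-zero variant expected file exists because underflow handling in the platform's strtod() differs: some C libraries silently return a (signed) zero for values below the smallest subnormal double, while others set ERANGE so PostgreSQL raises an out-of-range error. The silent-underflow branch is easy to see outside PostgreSQL; a small illustration in Python, not part of the thread:]

```python
# Illustration of the underflow behavior at issue: 10e-400 is far below
# the smallest subnormal double (~4.9e-324), so a permissive conversion
# silently yields a (signed) zero instead of reporting a range error.
def parse_double(s: str) -> float:
    """Mimic a strtod() that underflows to zero instead of failing."""
    return float(s)

print(parse_double("10e-400"))          # 0.0  (underflowed, no error)
print(parse_double("-10e-400"))         # -0.0 (the sign is preserved)
print(parse_double("10e-400") == 0.0)   # True
```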
Re: [HACKERS] plPHP and plRuby
Marc G. Fournier wrote: Actually it would be nice to have the not-included PLs present in src/pl/ as their own directories with a README.TXT containing fetch and build instructions. So we would have src/pl/plphp/README.TXT, src/pl/pljava/README.TXT, src/pl/plj/README.TXT, and anybody looking for pl-s would find the info in a logical place. *That* idea I like ...

ISTM that a clear strategy for how to deal with core, contrib, add-ons, etc. is long overdue, and that's the reason why these discussions pop up over and over again. The question "What are the criteria for core inclusion?" has not yet been answered. I thought PL/Java fulfilled those criteria, but a new threshold for the number of lines of code and a concern about code in an unmaintainable language made it impossible. The result of an unclear strategy can be perceived as somewhat unjust.

There seems to be unanimous consensus that PL/pgSQL belongs in core. Large object support, free text search and some others also receive support from everyone. These add-ons clearly belong where they are. The historical reasons to continuously include others are, IMHO, not so obvious, and the result undoubtedly creates first- and second-class citizens in the module flora. The split doesn't correlate very well with feature richness or popularity.

I have a suggestion that might help clear things up a bit :-) A couple of specialized teams need to be established (or rather, formalized, since they already exist to some extent) that can be thought of as core subsidiaries. The idea is that such a team would take on the maintenance of one specialized area of PostgreSQL. Java, for instance, is such an area. PostgreSQL has a huge number of Java users. They all use the JDBC driver and a few use PL/Java. There's been talk about Eclipse tool support, and some will have an interest in XA-compliance in order to gain JTA support, etc. Today, it's scattered all over the place.
Other subsidiary teams should be formed around ODBC (or .NET perhaps), PHP, Ruby, replication/clustering, etc. to take control over those areas. A very important part of my suggestion is that to the normal user, it would appear that what a core subsidiary team contributes really is *part of* the database proper, and not something maintained by a third-party contributor or commercial vendor. The team would maintain their own website (although all layout would be centralized), source code control system, mailing list, etc., but they would share a lot more of the PostgreSQL infrastructure than what is shared today. Important things would be:

- Documentation. Inclusion of a subsidiary module should mean that some chapters are added (automatically) to the user manual.
- Build farm support.
- Packaging and downloads.
- Server infrastructure.
- Seamless navigation from the PostgreSQL main web-site.

PgFoundry would live on like today, targeted at third-party modules and serving as an incubator for modules that aim to be included in core or into one of its subsidiaries.

Kind Regards, Thomas Hallgren ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] automatic system info tool?
On 7/18/06, Bort, Paul [EMAIL PROTECTED] wrote: Mind you, maybe perl provides emulation for uname? Not that I know of. Wouldn't $^O and $Config{archname} cover quite a few, though? ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] missing tuplestore_gettuple
Hi, In 8.2 the function tuplestore_gettuple in tuplestore.h is either missing or deprecated? Can someone shed some light on this? (It appears in the tuplestore.c file and there is no deprecation warning there.) 10x. -- Regards, Tzahi. -- Tzahi Fadida Blog: http://tzahi.blogsite.org | Home Site: http://tzahi.webhop.info WARNING TO SPAMMERS: see at http://members.lycos.co.uk/my2nis/spamwarning.html ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] automatic system info tool?
If you can open a command shell you can get the OS version with the 'ver' command under Windows:

  C:\> ver
  Microsoft Windows XP [Version 5.1.2600]

How do you do this from a program, though? Under UNIX, uname() is a function call as well as a program. It returns the OS name, version, hostname and system type. GetVersionEx() will get you the Windows version, service pack, etc., IIRC.

In perl:

  use POSIX;
  print join(',', POSIX::uname()), "\n";

prints:

  Windows NT,hostname.domain.com,5.0,Build 2195 (Service Pack 4),x86

Works on all platforms. (More detail on Win32 with: use Win32; print join(' ', Win32::GetOSVersion()), "\n";)

Andreas ---(end of broadcast)--- TIP 6: explain analyze is your friend
CSStorm occurred again by postgreSQL8.2. (Re: [HACKERS] poor performance with Context Switch Storm at TPC-W.)
Katsuhiko Okano wrote: In PostgreSQL 8.2, I changed NUM_SUBTRANS_BUFFERS to 128, recompiled, and measured again. At first there was no CSStorm, and the WIPS value was about 400: no occurrence when measuring for 30 minutes. But when measuring for 3 hours, the storm did occur after 1 hour and 10 minutes had passed. So increasing NUM_SUBTRANS_BUFFERS does not solve the problem; it is only postponed. If the number of SLRU buffers is too low in PostgreSQL 8.1.4 as well, then I think increasing the number of buffers there would bring the same result. (Although the buffers for CLOG or multi-transactions would also increase, I think that effect is small.) NUM_SLRU_BUFFERS has now been changed to 128 in PostgreSQL 8.1.4 and is under measurement; in version 8.1.4 the CSStorm likewise occurred after 1 hour and 10 minutes. A strange point: the number of LWLock acquisitions for the SLRU buffers is zero until the CSStorm occurs. After the CSStorm starts, both share locks and exclusive locks are requested and most of them are kept waiting (exclusive lock requests for SubtransControlLock increase rapidly after the CSStorm starts). Is different processing done depending on whether the CSStorm has occurred or not? regards, Katsuhiko Okano okano katsuhiko _at_ oss ntt co jp ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[HACKERS] On-disk bitmap index patch
Hi, I have posted a patch to the CVS head for on-disk bitmap index to pgsql-patches. If this can get in 8.2, that would be great. Any comments and suggestions are welcome. I still need to add several items: (1) README file in src/backend/access/bitmap. (2) Bitmap index documentation. (3) Hiding the internal btree. Also, I have disabled the multi-column index support because there is a known problem. Assume that there is a bitmap index on a and b. When a query predicate has only a, the current code may generate a wrong result. That's because the current code assumes that b is null. The ultimate problem is because the search code only handles one bitmap vector now. I need a fix to support manipulating multiple bitmap vectors. If you find any other problems, please let me know. Thanks, Jie ---(end of broadcast)--- TIP 6: explain analyze is your friend
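[Editor's note: to make the multi-column problem above concrete: with one bitmap vector per distinct key, a predicate on only the leading column has to OR together the vectors of every key sharing that prefix; treating b as null and consulting a single vector returns wrong rows. A minimal sketch of the correct combination, in Python with a hypothetical in-memory representation, not the patch's actual code:]

```python
# Hedged sketch (not the patch's code): a bitmap index keeps one
# bit-vector per distinct key tuple. With an index on (a, b), a
# predicate on `a` alone must OR the vectors of every (a, b)
# combination; assuming b IS NULL and using one vector is wrong.
from collections import defaultdict

class BitmapIndex:
    def __init__(self):
        self.vectors = defaultdict(int)  # key tuple -> bitmap as an int

    def insert(self, key, rownum):
        self.vectors[key] |= 1 << rownum

    def scan_prefix(self, a):
        """Combine all vectors whose leading column equals `a`."""
        result = 0
        for key, vec in self.vectors.items():
            if key[0] == a:
                result |= vec
        return result

idx = BitmapIndex()
rows = [("red", 1), ("red", 2), ("blue", 1)]
for i, key in enumerate(rows):
    idx.insert(key, i)

# Rows matching a = 'red' are rows 0 and 1, i.e. bitmap 0b11.
print(bin(idx.scan_prefix("red")))  # 0b11
```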
Re: [HACKERS] plPHP and plRuby
On Mon, Jul 17, 2006 at 07:37:41PM -0300, Marc G. Fournier wrote: Actually it would be nice to have the not-included PLs present in src/pl/ as their own directories with a README.TXT containing fetch and build instructions. So we would have src/pl/plphp/README.TXT src/pl/pljava/README.TXT src/pl/plj/README.TXT and anybody looking for pl-s would find the info in a logical place. *That* idea I like ... You could take the idea even further and place Makefiles or scripts there to download the source and set it up for compiling. "make cvs" or some such. Then on a stable release the script gets updated with the appropriate CVS tag and you're done. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [HACKERS] plPHP and plRuby
On 17-Jul-06, at 6:37 PM, Marc G. Fournier wrote: On Tue, 18 Jul 2006, Hannu Krosing wrote: On Mon, 2006-07-17 at 22:01, Martijn van Oosterhout wrote: On Mon, Jul 17, 2006 at 12:18:46PM -0400, Andrew Dunstan wrote: Well, I am not making any promises right now about when buildfarm will support external modules. I've been playing with the idea of having a subdirectory named extras with descriptor files describing how to fetch a project and compile it. I got the fetching and the unpacking going, but the building isn't there yet. Still, it's an interesting idea... Actually it would be nice to have the not-included PLs present in src/pl/ as their own directories with a README.TXT containing fetch and build instructions. So we would have src/pl/plphp/README.TXT src/pl/pljava/README.TXT src/pl/plj/README.TXT and anybody looking for pl-s would find the info in a logical place. *That* idea I like ... Actually, taking that one step further: at least two or three of these projects have automated build processes (Ruby and Java, and I believe Python); placing their equivalent of a Makefile into this dir would allow users versed in the language of choice to build their extension easily. Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email . [EMAIL PROTECTED] MSN . [EMAIL PROTECTED] Yahoo . yscrappy Skype . hub.org ICQ . 7615664 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] set search_path in dump output considered harmful
On 7/14/06, Tom Lane [EMAIL PROTECTED] wrote: [ problems with missing schema in dump ] No, not one of these things can be blamed on pg_dump. OK, it's not exactly a bug, but it is still a big annoyance that instead of dumping fully qualified names it juggles with the search path. And I'm annoyed as a user looking at the dump:

* To see what schema the function/table/type is in, I need to carefully look several pages back until there is a schema change.
* I cannot copy-paste from the dump.
* The function contents must still contain fully qualified names, so it cannot be said that the schema is not part of the function definition in some way. Same goes for other objects.
* The name 'search_path' for picking storage is sick :) I understand the backwards-compatibility angle, but the over-usage in dumps makes it even more sick...

-- marko ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [PATCHES] Proposed patch for contrib/cube
I have completed upgrading the cube codebase to use the V1 call protocol. However, before I submit a patch I would really like to work out why I have never been able to get cube to pass installcheck. This is what I get when I run installcheck on a clean version of 8.1.4 (I get similar results on CVS HEAD). The regression.diff is attached.

josh:~/installs/postgresql-8.1.4/contrib/cube josh$ make installcheck
make -C ../../src/test/regress pg_regress
make[1]: `pg_regress' is up to date.
/bin/sh ../../src/test/regress/pg_regress --dbname=contrib_regression cube
(using postmaster on Unix socket, default port)
== dropping database contrib_regression ==
DROP DATABASE
== creating database contrib_regression ==
CREATE DATABASE
== dropping regression test user accounts ==
== running regression test queries ==
test cube                 ... FAILED
== 1 of 1 tests failed. ==

I have tried this both on OS X and Linux and get similar results.

Josh Reich

Neil Conway wrote: On Mon, 2006-07-17 at 17:55 -0400, Joshua Reich wrote: Ok. So, the cube code looks very unmaintained (not to offend anyone), but it is all in V0 and I believe make installcheck fails out of the box due to new error message formats. It passes for me with CVS HEAD. The cube regression test is also run as part of the buildfarm process, so it sounds like an error on your end.

*** ./expected/cube_1.out	Sun Jun 26 21:19:43 2005
--- ./results/cube.out	Tue Jul 18 09:44:14 2006
***************
*** 258,310 ****
  -- invalid input: parse errors
  SELECT ''::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at end of input
  SELECT 'ABC'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near "A"
  SELECT '()'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ")"
  SELECT '[]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near "]"
  SELECT '[()]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ")"
  SELECT '[(1)]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near "]"
  SELECT '[(1),]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near "]"
  SELECT '[(1),2]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near "2"
  SELECT '[(1),(2),(3)]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ","
  SELECT '1,'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at end of input
  SELECT '1,2,'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at end of input
  SELECT '1,,2'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ","
  SELECT '(1,)'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ")"
  SELECT '(1,2,)'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ")"
  SELECT '(1,,2)'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ","
  -- invalid input: semantic errors and trailing garbage
  SELECT '[(1),(2)],'::cube AS cube; -- 0
  ERROR:  bad cube representation
! DETAIL:  syntax error at or near ","
  SELECT '[(1,2,3),(2,3)]'::cube AS cube; -- 1
  ERROR:  bad cube representation
  DETAIL:  different point dimensions in (1,2,3) and (2,3)
--- 258,310 ----
  -- invalid input: parse errors
  SELECT ''::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at end of input
  SELECT 'ABC'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near "A"
  SELECT '()'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near ")"
  SELECT '[]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near "]"
  SELECT '[()]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near ")"
  SELECT '[(1)]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near "]"
  SELECT '[(1),]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near "]"
  SELECT '[(1),2]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near "2"
  SELECT '[(1),(2),(3)]'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near ","
  SELECT '1,'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at end of input
  SELECT '1,2,'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at end of input
  SELECT '1,,2'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near ","
  SELECT '(1,)'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:  parse error at or near ")"
  SELECT '(1,2,)'::cube AS cube;
  ERROR:  bad cube representation
! DETAIL:
Re: [HACKERS] contrib promotion?
On 7/14/06, Tom Lane [EMAIL PROTECTED] wrote: I don't see a strong need for moving pgcrypto into core, and there's at least one argument against it: if someone needs a crypto-free version of postgres for use someplace with benighted laws, they would be screwed. The image of a hypothetical evil government is not a thing to base decisions on :) Although I've tried to develop pgcrypto to be easily mergeable into core, I don't want to push it myself; the push should come from users. That said, there is one situation that is badly handled in the current setup - storing passwords in the database. There is an md5() function in core, and everything in /contrib is basically invisible on the website and in the official docs. So even PG core devs suggest using md5() for this task. But this is inadequate - brute-forcing an md5 hash can be done pretty easily on today's desktop computers. PostgreSQL itself can get away with it only because its regular users can't see the hash. But that is not so for ordinary apps. So I would like either some mention of the more useful/stable modules in the core docs, or a way for contrib modules to become 'official' add-on modules (like PLs are). A full merge into core would fix this also, but indeed there are not many technical reasons for it. (And editing pg_proc.h is a PITA - I'd consider that a technical reason against it ;) -- marko ---(end of broadcast)--- TIP 6: explain analyze is your friend
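[Editor's note: the brute-force point above is easy to demonstrate: an unsalted md5() of a password is deterministic, so identical passwords hash identically and precomputed tables apply, whereas a salted, iterated scheme (the kind pgcrypto's crypt() provides) gives every user a unique, slow-to-guess hash. A rough illustration in Python, not pgcrypto itself; the store() helper is hypothetical:]

```python
# Why bare md5(password) is inadequate for stored passwords: it is
# unsalted, so equal passwords collide and precomputed tables apply.
# pgcrypto's crypt() salts and iterates the hash; hashlib.pbkdf2_hmac
# plays a comparable role here (illustration only, not pgcrypto's code).
import hashlib
import os

pw = b"secret"

# Unsalted md5: deterministic, identical for every user with this password.
assert hashlib.md5(pw).hexdigest() == hashlib.md5(pw).hexdigest()

def store(password: bytes):
    """Hypothetical helper: salt + iterate, returning (salt, hash)."""
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# Two users with the same password get different stored hashes, and each
# attacker guess costs many hash rounds instead of one.
s1, h1 = store(pw)
s2, h2 = store(pw)
print(h1 != h2)  # True: the salt makes each stored hash unique
```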
Re: [HACKERS] Windows buildfarm support, or lack of it
-Original Message- From: Petr Jelinek [mailto:[EMAIL PROTECTED] Sent: 16 July 2006 18:05 To: Dave Page Cc: PostgreSQL-development Subject: Re: Windows buildfarm support, or lack of it Dave Page wrote: I have spare licences for most versions of Windows as well, so if Microsoft's virtual server product is not too expensive for us I can probably add a few platform variations to that box. I'll look into it. In fact MS released Virtual PC 2004 for free a couple days ago (http://www.microsoft.com/windows/virtualpc/default.mspx) It seems that VMWare are following suit - their entry level VMWare Server is now free for Windows and Linux :-) http://www.vmware.com/products/server/ Regards, Dave. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] automatic system info tool?
Andrej Ricnik-Bay wrote: On 7/18/06, Bort, Paul [EMAIL PROTECTED] wrote: Mind you, maybe perl provides emulation for uname? Not that I know of. Wouldn't $^O and $Config{archname} cover quite a few, though? No. As previously explained, these values reflect what was true when and where perl was compiled. In general, I need what is true at the time buildfarm is run. Anyway, I think we have some good possibilities. cheers andrew ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
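[Editor's note: the compile-time vs run-time distinction above is the crux: $^O and $Config{archname} record the platform perl itself was built on, while a uname()-style call reports the host the script is running on now. The same split exists in other languages; for example, Python's platform module reads the running system, shown here only as an analogy to POSIX::uname():]

```python
# Run-time system identification, analogous to POSIX::uname() in Perl:
# every field below is read from the running host at call time, not
# recorded when the interpreter was compiled.
import platform

info = platform.uname()
print(info.system)   # e.g. 'Linux', 'Windows', 'Darwin'
print(info.release)  # OS release of the machine running the script
print(info.machine)  # hardware name, e.g. 'x86_64'
```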
[HACKERS] password is not required, authentication is overridden
Hi, I already tried to mail [EMAIL PROTECTED] but my message was stalled.

Version: 8.1.4 (binary for win32)
OS: Windows XP SP2

I type: pg_dump -h localhost -U postgres database_name and there is no prompt for the password. I haven't made changes to pg_hba.conf. I'm logged in as user admin, not postgres; the password is not empty.

My pg_hba.conf:

# ...
# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
# IPv4 local connections:
host    all       all   127.0.0.1/32   md5

Without the -h localhost, I get the same (complete) dump. SHOW hba_file; gives me: d:/program files/postgreSQL/8.1/data/pg_hba.conf (I only have one file named pg_hba.conf on my computer.) I tried again with pg_dump -U postgres -W database_name: when I enter the right password it passes, with a wrong one it denies me, but without the -W it lets me pass.

thanks and bye, Thomas ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[HACKERS] Possible Typecasting Bug with coalesce()
Hello everyone, I found something that struck me as odd revolving around automatic typecasting and coalesce. It appears as though a string literal will not automatically be cast to a timestamp if the literal is coalesced. Consider the following example:

= select now() > 'Jul 14 2006 9:16:47AM';
 ?column?
----------
 t
(1 row)

= select now() > coalesce('Jul 14 2006 9:16:47AM');
 ?column?
----------
 f
(1 row)

= select now() > coalesce('Jul 14 2006 9:16:47AM')::timestamp with time zone;
 ?column?
----------
 t
(1 row)

= select now() > coalesce('Jul 14 2006 9:16:47AM'::timestamp with time zone);
 ?column?
----------
 t
(1 row)

Just wanted to know if anyone was aware of this behavior and if it is correct. Thanks, Scott. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] plpython sets
Tino Wildenhain wrote: http://python.projects.postgresql.org/ This works very well for me - although it needs some more finish (docs and so on); maybe if more people use it, it can get better. SRFs - even lazy ones (e.g. generators) work nicely there. Regards Tino Wildenhain eheh... I wasn't able to build it... In the meantime some guy in the list provided me a setof-enabled plpython.c. Now built and working fine! Does someone know why the back-end effort is duplicated? Thanks, Matteo Bertini ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[HACKERS] using constraint based paritioning to fix EAV type schemas? (view inheritance and constraints)
I have a client with the following EAV-inspired schema:

CREATE TABLE many_tables (
    table_id text primary key, -- defines which virtual table is encoded
    attribute1 text,
    attribute2 text,
    attribute3 text,
    attribute4 text,
    ...
);

I'd like to use a mix of constraint-based partitioning, rules _and_views_ to implement a real schema underneath this mess, like the following:

CREATE TABLE cat (
    cat_id INTEGER PRIMARY KEY,
    cat_name TEXT NOT NULL,
    aloofness NUMERIC(4,3) DEFAULT 1.0
        CHECK (0.0 <= aloofness AND aloofness <= 1.0)
);

CREATE RULE many_tables_cat_insert AS
ON INSERT TO many_tables WHERE table_id = 'cat'
DO INSTEAD
INSERT INTO cat (cat_id, cat_name, aloofness)
VALUES (
    CAST(attribute1 AS integer),
    attribute2,
    CAST(attribute3 AS numeric(4,3))
    -- gleefully ignore the other attributes
);

-- etc for UPDATE, and DELETE rules

-- This part doesn't work
CREATE VIEW many_tables_a (CHECK (table_id = 'cat')) INHERITS (many_tables) AS
SELECT 'cat' AS table_id,
    CAST(cat_id AS text) AS attribute1,
    cat_name AS attribute2,
    CAST(aloofness AS text) AS attribute3,
    null AS attribute4,
    ...
FROM cat;

So, I guess I'm stuck doing the UNION ALL approach in this instance. This won't get me the partitioning win, nor the flexibility and cleanliness of design that I'd get with inheritance. As far as I can see, supporting the above would mean adding inheritance and constraint support to views. Does anyone have a better approach? Drew ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] [PATCHES] Proposed patch for contrib/cube
What version of bison do you have?

cheers

andrew

Joshua Reich wrote: I have completed upgrading the cube codebase to use the V1 call protocol. However, before I submit a patch I would really like to work out why I have never been able to get cube to pass installcheck. This is what I get when I run installcheck on a clean version of 8.1.4 (I get similar results on CVS HEAD). The regression.diff is attached. I have tried this both on OS X and Linux and get similar results. Josh Reich [snip]
[HACKERS] gBorg misbehaviour
Looks like gBorg has gone down... The Slony-I project does plan to move to pgFoundry, once 1.2 is released... http://slony-wiki.dbitech.ca/index.php/Move_to_PgFoundry_Checklist But we need to get to that point (1.2) first. Alas, gBorg being down today doesn't help :-(. -- (format nil [EMAIL PROTECTED] cbbrowne ntlug.org) http://cbbrowne.com/info/sap.html Success is something I will dress for when I get there, and not until. -- Unknown ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] [PATCHES] Proposed patch for contrib/cube
I upgraded to the latest version (2.3) as per the warning after running configure.

Josh

Andrew Dunstan wrote: What version of bison do you have? cheers andrew Joshua Reich wrote: [snip]
Re: [HACKERS] [PATCHES] Proposed patch for contrib/cube
Then maybe you need to remove cubeparse.[ch] and try again. Bison changed the error message from parse error to syntax error in version 1.875 and it has been the same ever since, AFAIK. cheers andrew Joshua Reich wrote: I upgraded to the latest version (2.3) as per the warning after running configure. Josh Andrew Dunstan wrote: What version of bison do you have? cheers andrew Joshua Reich wrote: I have completed upgrading the cube codebase to use the V1 call protocol. However, before I submit a patch I would really like to work out why I have never been able to get cube to pass installcheck. [snip: remainder of the quoted message and its regression diff, identical to the post above]
Re: [HACKERS] missing tuplestore_gettuple
Tzahi Fadida [EMAIL PROTECTED] writes: In 8.2 the function tuplestore_gettuple in tuplestore.h is either missing or deprecated? Can someone shed some light on this? You need to switch to using tuplestore_gettupleslot. Note that what comes back won't have any system columns, because the sort code now stores MinimalTuples instead of complete HeapTuples. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: CSStorm occurred again by postgreSQL8.2. (Re: [HACKERS] poor performance with Context Switch Storm at TPC-W.)
Katsuhiko Okano [EMAIL PROTECTED] writes: It does not solve, even if it increases the number of NUM_SUBTRANS_BUFFERS. The problem was only postponed. Can you provide a reproducible test case for this? regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [PATCHES] Proposed patch for contrib/cube
Joshua Reich [EMAIL PROTECTED] writes: Andrew Dunstan wrote: What version of bison do you have? I upgraded to the latest version (2.3) as per the warning after running configure. Better look again, because you definitely seem to be using an old bison to build cubeparse.c. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Windows buildfarm support, or lack of it
Dave Page wrote: It seems that VMWare are following suit - their entry level VMWare Server is now free for Windows and Linux :-) http://www.vmware.com/products/server/ It has been for some time, at least in Beta, so I'm not sure who really went first. We should be supplying a sample appliance for VMware server that showcases our stuff - maybe the LiveCD project guys could do that? cheers andrew ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] password is no required, authentication is overridden
Are you sure you don't have a pgpass file, or the environment variable PGPASSWORD set? cheers andrew Thomas Bley wrote: Hi, I already tried to mail [EMAIL PROTECTED] but my message was stalled. Version: 8.1.4 (binary for win32) OS: Windows XP SP2 I type: pg_dump -h localhost -U postgres database_name and there is no question for the password. I haven't made changes to pg_hba.conf. I'm logged in as user admin, not postgres, the password is not empty. my pg_hba.conf: # ... # TYPE DATABASE USER CIDR-ADDRESS METHOD # IPv4 local connections: host all all 127.0.0.1/32 md5 without the -h localhost, I get the same (complete) dump http://www.postgresql.org/docs/current/static/sql-createtable.html SHOW hba_file; gives me: d:/program files/postgreSQL/8.1/data/pg_hba.conf (I only have one file named pg_hba.conf on my computer.) I tried again with pg_dump -U postgres -W database_name when I enter the right password it passes, with the wrong one it denies me, without the -W it lets me pass. thanks and bye, Thomas ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Possible Typecasting Bug with coalesce()
MotherMGA [EMAIL PROTECTED] writes: => select now() > coalesce('Jul 14 2006 9:16:47AM'); The coalesce() function is going to resolve its datatype as text in this situation, and then text dominates timestamp in the comparison (ie, the result of now() is coerced to text). When you write => select now() > 'Jul 14 2006 9:16:47AM'; the literal's type doesn't have to be resolved until it's compared to now(), and that comparison is what gives the parser the hint that the literal ought to be considered to be a timestamp rather than just text. => select now() > coalesce('Jul 14 2006 9:16:47AM'::timestamp with time zone); This is what you need to do if you want the literal to be treated as timestamp right off. The only bug I see here is that implicit coercions to text are a bad idea :-( --- IMHO it would be better if your first query failed instead of giving you unexpected behavior. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] password is no required, authentication is overridden
Thomas Bley wrote: I type: pg_dump -h localhost -U postgres database_name and there is no question for the password. Have you created a .pgpass (or whatever the equivalent in the Windows world is)? That could be supplying the password. I haven't made changes to pg_hba.conf. I'm logged in as user admin, not postgres, the password is not empty. my pg_hba.conf: # ... # TYPE DATABASE USER CIDR-ADDRESS METHOD # IPv4 local connections: host all all 127.0.0.1/32 md5 without the -h localhost, I get the same (complete) dump I'm not sure what the default behaviour for local socket connections is. Perhaps you should add a line to your pg_hba.conf to define a rule for local socket connections. For example local all all md5 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] [PATCHES] Proposed patch for contrib/cube
I just downgraded from 2.3 to 1.875 and rebuilt everything, and now it works fine. As per Andrew's previous email, in 2.3 the error messages were changed from parse error to syntax error, causing the diff to fail. Josh Tom Lane wrote: Joshua Reich [EMAIL PROTECTED] writes: Andrew Dunstan wrote: What version of bison do you have? I upgraded to the latest version (2.3) as per the warning after running configure. Better look again, because you definitely seem to be using an old bison to build cubeparse.c. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly -- Joshua Reich Finance and Corporate Development ROOT Exchange, A Division of ROOT Markets 601 W. 26th St. / Suite 1500 New York, NY 10001 W - (212) 645 6320 x 7101 M / T - (646) 427 7959 E - [EMAIL PROTECTED]
Re: [HACKERS] password is no required, authentication is overridden
Andrew Hammond wrote: I'm not sure what the default behaviour for local socket connections is. Perhaps you should add a line to your pg_hba.conf to define a rule for local socket connections. For example local all all md5 He's on Windows - no local lines there. cheers andrew ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] password is no required, authentication is overridden
Hello Andrew, Thanks a lot! There is a pgpass file and it contains the password: D:\Documents and Settings\admin\Application Data\postgresql\pgpass.conf My installation is on: D:\Program Files\PostgreSQL\8.1\ Maybe it got automatically created by pgadmin3? Looking into the documentation delivered with pg (PostgreSQL and pgAdmin Help) there is a page about the pgpass file, but there is no hint to the environment variable PGPASSWORD or the pgpass file on the page of pg_dump, maybe you can add it? In the documentation page of psql there is a hint to the pgpass file. bye Thomas Andrew Dunstan wrote: Are you sure you don't have a pgpass file, or the environment variable PGPASSWORD set? cheers andrew Thomas Bley wrote: Hi, I already tried to mail [EMAIL PROTECTED] but my message was stalled. Version: 8.1.4 (binary for win32) OS: Windows XP SP2 I type: pg_dump -h localhost -U postgres database_name and there is no question for the password. I haven't made changes to pg_hba.conf. I'm logged in as user admin, not postgres, the password is not empty. my pg_hba.conf: # ... # TYPE DATABASE USER CIDR-ADDRESS METHOD # IPv4 local connections: host all all 127.0.0.1/32 md5 without the -h localhost, I get the same (complete) dump http://www.postgresql.org/docs/current/static/sql-createtable.html SHOW hba_file; gives me: d:/program files/postgreSQL/8.1/data/pg_hba.conf (I only have one file named pg_hba.conf on my computer.) I tried again with pg_dump -U postgres -W database_name when I enter the right password it passes, with the wrong one it denies me, without the -W it lets me pass. thanks and bye, Thomas ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
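[Editor's note: for anyone else hitting this, each line of the password file (~/.pgpass on Unix, %APPDATA%\postgresql\pgpass.conf on Windows) uses a colon-separated format. The entry below is only a sketch with a placeholder password, roughly what pgAdmin III's "store password" option would have written:]

```
# hostname:port:database:username:password
# (* acts as a wildcard in any of the first four fields)
localhost:5432:*:postgres:placeholder-password
```

libpq consults this file whenever a password is needed but not supplied, which is exactly why pg_dump stops prompting once the file exists.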
[HACKERS] does pgxs work?
I'm having trouble with pgxs and it seems like either it's doing something actively wrong which defeats the purpose of having pgxs or I'm doing something wrong in how I'm using it. I have postgres installed in /usr/local/pgsql in addition to having the debian packages installed in the usual places. My Makefile explicitly calls the development install: PGXS := $(shell /usr/local/pgsql/bin/pg_config --pgxs) Yet the results get definitions that try to pull in the regular install: # makefile (from `/r3/usr_local/pgsql/lib/pgxs/src/makefiles/../../src/Makefile.global', line 126) includedir := /usr/include/postgresql And it doesn't even get those right anyways: gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -g -fpic -I. -I/usr/include/postgresql/server -I/usr/include/postgresql/internal -D_GNU_SOURCE -c -o test.o test.c -MMD test.c:19:22: error: postgres.h: No such file or directory test.c:20:18: error: fmgr.h: No such file or directory test.c:21:28: error: libpq/pqformat.h: No such file or directory test.c:22:28: error: utils/memutils.h: No such file or directory It seems like the point of pgxs is precisely to find the correct locations for these files. If it only works with standard locations then we could just hard code them in Makefiles. So I'm pretty sure this is supposed to work. What could I be doing wrong? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
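[Editor's note: for reference, a minimal out-of-tree PGXS makefile normally needs little more than the following. The module name `test` is taken from the compile line above; the pg_config path matches the development install mentioned in the message and is otherwise an assumption:]

```make
# Build test.so from test.c against whatever installation pg_config describes.
MODULES = test
PG_CONFIG = /usr/local/pgsql/bin/pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
```

The whole point, as the message says, is that `includedir` and friends should come from the chosen pg_config rather than being hard-coded; if they resolve to a different installation, something in the included Makefile.global is overriding them.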
Re: [HACKERS] password is no required, authentication is overridden
There is a pgpass file and it contians the password: D:\Documents and Settings\admin\Application Data\postgresql\pgpass.conf My installation is on: D:\Program Files\PostgreSQL\8.1\ Maybe it got automatically created by pgadmin3 ? I'll bet you're right. Looking into the documentation delivered with pg (PostgreSQL and pgAdmin Help) there is a page about the pgpass file, but there is not hint to environment variable pgpassword or the pgpass-file on the page of pg_dump, maybe you can add it ? In the documentation page of psql there is a hint to the pgpass file. That's a good idea. What do you think would be a good way to say it? Drew ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [PATCHES] [HACKERS] 8.2 features?
If the use case is people running MySQL dumps, then there will be millions of values-targetlists in MySQL dumps. I did some experimentation just now, and could not get mysql to accept a command longer than about 1 million bytes. It complains about Got a packet bigger than 'max_allowed_packet' bytes which seems a bit odd because max_allowed_packet is allegedly set to 16 million, but anyway I don't think people are going to be loading any million-row tables using single INSERT commands in mysql either. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
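[Editor's note: for anyone repeating Tom's experiment, note that the MySQL server and the mysql client each enforce their own copy of this limit, so the effective maximum is the smaller of the two — which may explain seeing the error despite a 16 MB server value. A sketch of inspecting and raising it, in MySQL syntax:]

```sql
-- Server side: check and raise the limit (applies to new connections).
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 16777216;  -- 16 MB, example value
```

The client-side limit can be raised when invoking the client, e.g. `mysql --max_allowed_packet=16M`.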
[HACKERS] Progress bar updates
Has anyone looked at or thought about what it would take to get progress bars from clients like pgadmin? (Or dare I even suggest psql:) My first thought would be a message like CancelQuery which would cause the backend to peek into a static data structure and return a message that the client could parse and display something intelligent. Various commands would then stuff information into this data structure as they worked. For a first cut this data structure could just be a float between 0 and 1. Or perhaps it should be two integers, a current and an estimated final. That would let the client do more intelligent things when the estimates change for the length of the whole job. Later I could imagine elaborating into more complex structures for representing multi-step processes or even whole query plans. I also see it possibly being interesting to stuff this data structure into shared memory handled just like how Tom handled the current command. That would let you see the other queries running on the server, how long they've been running, and estimates for how long they'll continue to run. I would suggest starting with utility functions like index builds or COPY which would have to be specially handled anyways. Handling all optimizable queries in a single generic implementation seems like something to tackle only once the basic infrastructure is there and working for simple cases. Of course the estimates would be not much better than guesses. But if you want to say it's not worth having since they won't be perfectly accurate be prepared to swear that you've never looked at the % complete that modern ftp clients and web browsers display even though they too are, of course, widely inaccurate. They nonetheless provide some feedback the user desperately wants to be reassured that his job is making progress and isn't years away from finishing. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? 
http://www.postgresql.org/docs/faq
Re: [HACKERS] Progress bar updates
+1 Looked, talked it through, it's a very important feature to BI users. - Luke Sent from my GoodLink synchronized handheld (www.good.com) -Original Message- From: Gregory Stark [mailto:[EMAIL PROTECTED] Sent: Tuesday, July 18, 2006 02:38 PM Eastern Standard Time To: pgsql-hackers@postgresql.org Subject:[HACKERS] Progress bar updates Has anyone looked thought about what it would take to get progress bars from clients like pgadmin? (Or dare I even suggest psql:) My first thought would be a message like CancelQuery which would cause the backend to peek into a static data structure and return a message that the client could parse and display something intelligent. Various commands would then stuff information into this data structure as they worked. For a first cut this data structure could just be a float between 0 and 1. Or perhaps it should be two integers, a current and an estimated final. That would let the client do more intelligent things when the estimates change for the length of the whole job. Later I could imagine elaborating into more complex structures for representing multi-step processes or even whole query plans. I also see it possibly being interesting to stuff this data structure into shared memory handled just like how Tom handled the current command. That would let you see the other queries running on the server, how long they've been running, and estimates for how long they'll continue to run. I would suggest starting with utility functions like index builds or COPY which would have to be specially handled anyways. Handling all optimizable queries in a single generic implementation seems like something to tackle only once the basic infrastructure is there and working for simple cases. Of course the estimates would be not much better than guesses. 
But if you want to say it's not worth having since they won't be perfectly accurate be prepared to swear that you've never looked at the % complete that modern ftp clients and web browsers display even though they too are, of course, widely inaccurate. They nonetheless provide some feedback the user desperately wants to be reassured that his job is making progress and isn't years away from finishing. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] url for TODO item, is it right?
Greg Sabino Mullane wrote: http://archives.postgresql.org/pgsql-www/2006-07/msg00061.php Yeah. I think the TODO item is intended to point to what is now http://archives.postgresql.org/pgsql-hackers/2006-02/msg00163.php or one of the earlier messages in that thread. This is a very ugly problem. Note that there are also URLs that cannot be changed, such as older messages that point to archive posts, and many places on the web outside of our control. Why can't we just write a script that creates new numbers as needed, such as msg00163.1.php and msg00163.2.php? As far as I can tell, there is nothing magical about the naming schema itself that would cause such URLs to break anything. Agreed. It is nice to have the emails numbered in arrival order, but changes to old URLs are worse. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] url for TODO item, is it right?
Tom Lane wrote: Michael Fuhr [EMAIL PROTECTED] writes: On Mon, Jul 17, 2006 at 12:25:09AM -0500, Jaime Casanova wrote: i found this on the Monitoring section: o Allow protocol-level BIND parameter values to be logged http://archives.postgresql.org/pgsql-hackers/2006-02/msg00165.php But i don't understand why that thread is related to the TODO item, i'm missing something? Possibly the message renumbering that Tom griped about: http://archives.postgresql.org/pgsql-www/2006-07/msg00061.php Yeah. I think the TODO item is intended to point to what is now http://archives.postgresql.org/pgsql-hackers/2006-02/msg00163.php or one of the earlier messages in that thread. Perhaps when Bruce realizes he needs to recheck every link in the TODO files, he'll get on the warpath with me ;-) (Sorry, just catching up on this issue.) Yes, I can fix the TODO item URLs, but many email messages reference URLs themselves: http://archives.postgresql.org/pgsql-patches/2006-06/msg00096.php The URL in the email actually works, but I am sure others do not. Because we don't have control over the email contents (think Google), I don't think we can renumber old email items without a continual stream of complaints from users. -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] src/tools/pginclude considered harmful (was Re:
Good, added to pginclude/README: Also, tests should be done with configure settings of --enable-cassert and EXEC_BACKEND on and off. I think we had more problems this time just because our code is more complex. --- Tom Lane wrote: Bruce Momjian [EMAIL PROTECTED] writes: FYI, 527 include were removed from non-header C files in this run. That is not something that can be easily done manually. It's not so easily done automatically, either :-(. I'm not sure why this go-round was so much more painful than the last, but it very clearly exposed the deficiencies in your testing process. Mainly, that you tested only one set of configure options on one platform. I would say that minimum requirements for doing this again in future are (1) test with and without --enable-cassert, and (2) test with and without EXEC_BACKEND. We *know* that both those things change the set of headers required. It might be necessary to actually test the WIN32 port separately --- EXEC_BACKEND is close but not the same. regards, tom lane -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Progress bar updates
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Gregory Stark Sent: 18 July 2006 19:36 To: pgsql-hackers@postgresql.org Subject: [HACKERS] Progress bar updates For a first cut this data structure could just be a float between 0 and 1. Or perhaps it should be two integers, a current and an estimated final. That would let the client do more intelligent things when the estimates change for the length of the whole job. Hi Greg, I would vote for the latter so that we could give more meaningful feedback - for example, when vacuuming you might give a scale of 0 to num tables. In cases such as COPY where you mightn't have any idea of an upper bound, then a simple heartbeat could be supplied so at least the client could count rows (or 100's of rows) processed or whatever. It would certainly allow us to present a nicer user experience in pgAdmin :-) Regards, Dave. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] plpython sets
On Monday, 2006-07-17 at 23:22, Matteo Bertini wrote: Tino Wildenhain wrote: http://python.projects.postgresql.org/ This works very well for me - although it needs some more finish (docs and so on) maybe if more people use it, it can get better. SRF - even lazy ones (e.g. generators) work nicely there. Regards Tino Wildenhain eheh... I wasn't able to build it... In the meantime some guy in the list provided me a setof-enabled plpython.c. Now built and working fine! Does anyone know why the back-end effort is duplicated? http://python.projects.postgresql.org/ seems to be aiming at a much larger python/postgres integration scheme than src/pl/python, and it has taken a different and more pythonic approach at many places which is nice but takes a lot longer to mature. -- Hannu Krosing Database Architect Skype Technologies OÜ Akadeemia tee 21 F, Tallinn, 12618, Estonia Skype me: callto:hkrosing Get Skype for free: http://www.skype.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [SQL] using constraint based paritioning to fix EAV type schemas? (view inheritance and constraints)
On 7/18/06, Aaron Bono [EMAIL PROTECTED] wrote: On 18 Jul 2006 09:07:08 -0700, Andrew Hammond [EMAIL PROTECTED] wrote: I have a client with the following EAV inspired schema.

CREATE TABLE many_tables (
    table_id text primary key,   -- defines which virtual table is encoded
    attribute1 text,
    attribute2 text,
    attribute3 text,
    attribute4 text,
    ...
);

I'd like to use a mix of constraint based partitioning, rules and views to implement a real schema underneath this mess, like the following.

CREATE TABLE cat (
    cat_id INTEGER PRIMARY KEY,
    cat_name TEXT NOT NULL,
    aloofness NUMERIC(1,3) DEFAULT 1.0
        CHECK (0.0 <= aloofness AND aloofness <= 1.0)
);

CREATE RULE many_tables_cat_insert AS
    ON INSERT TO many_tables WHERE table_id = 'cat' DO INSTEAD
    INSERT INTO cat (cat_id, cat_name, aloofness)
    VALUES (
        CAST(attribute1 AS integer),
        attribute2,
        CAST(attribute3 AS numeric(1,3))
        -- gleefully ignore the other attributes
    );

-- etc for UPDATE, and DELETE rules

-- This part doesn't work
CREATE VIEW many_tables_a (CHECK (table_id = 'cat')) INHERITS (many_tables) AS
    SELECT 'cat' AS table_id,
           CAST(cat_id AS text) AS attribute1,
           cat_name AS attribute2,
           CAST(aloofness AS text) AS attribute3,
           null AS attribute4,
           ...
    FROM cat;

So, I guess I'm stuck doing the UNION ALL approach in this instance. This won't get me the partitioning win, nor the flexibility and cleanliness of design that I'd get with inheritance. As far as I can see, supporting the above would mean adding inheritance and constraint support to views. Does anyone have a better approach?

If you don't mind having redundant data, you could change the ON INSERT trigger to copy the data into cat, add an ON UPDATE trigger (you should do this anyway) and revoke direct insert/update to cat. Then you don't need many_tables_a or a UNION.

There's already a performance problem, I suspect that would just exacerbate it. Since I want to encourage developers to use the relational tables instead of the many_tables table, refusing DML wouldn't be a good idea.
Of course I don't know if this achieves your intended result or not. What is your business requirement for this? The current virtual table design has performance (as well as maintenance) issues. Performance tuning is problematic. A major re-design of the application is not something that can be done until the next major release. However, if I can slide a proper relational schema underneath this hodge-podge table while retaining compatibility for legacy apps then it becomes possible to fix parts of the application to use the relational tables incrementally on an as-needed basis. If I could get constraint based exclusion to work with the partitioning, then I would be able to realize performance improvements immediately (which is always good for a consultant). Unfortunately I don't see any way to do this. Inheritance seems to fit backwards from what I'm actually trying to do. I've seen a few EAV designs in practice. They've all been problematic. I'd like to have a better way of dealing with them. Which is why I'm tentatively suggesting support for inheritance and constraints in views. If there's some other way to achieve constraint based exclusion across a UNION of heterogeneous tables, I'd love to hear it. Drew
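[Editor's note: for concreteness, the UNION ALL fallback Andrew mentions might be sketched as below. The dog table is hypothetical, included only to show how a second virtual table would join the view:]

```sql
-- One branch per real table; every column cast back to text so legacy
-- readers of many_tables keep working.  (Sketch only.)
CREATE VIEW many_tables_v AS
    SELECT 'cat' AS table_id,
           CAST(cat_id AS text)    AS attribute1,
           cat_name                AS attribute2,
           CAST(aloofness AS text) AS attribute3,
           NULL::text              AS attribute4
      FROM cat
    UNION ALL
    SELECT 'dog',
           CAST(dog_id AS text),
           dog_name,
           NULL,
           NULL
      FROM dog;  -- hypothetical second table
```

As the thread notes, the planner cannot exclude branches of such a view from a `WHERE table_id = 'cat'` predicate the way constraint exclusion prunes inherited partitions, which is exactly the limitation being discussed.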
[HACKERS] contrib/hstore - missing, deleted or not material for contrib ?
If looking at http://www.sai.msu.su/~megera/oddmuse/index.cgi?Hstore the following is displayed : - License Stable version, included into PostgreSQL distribution, released under BSD license. Development version, available from this site, released under the GNU General Public License, version 2 (June 1991). - But I never found it in my contrib dir - is it just me that overlooked it, or has it been removed, or has it never been included? I would really like it to be included for ease of installation on linux systems - if pgxs will fix this so it can be compiled as a standalone extension then it is a solution I can live with. Best regards, Nicolai Petri ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [SQL] using constraint based partitioning to fix EAV
On a fine day, Tue, 2006-07-18 at 16:44, Andrew Hammond wrote: On 7/18/06, Aaron Bono [EMAIL PROTECTED] wrote: On 18 Jul 2006 09:07:08 -0700, Andrew Hammond [EMAIL PROTECTED] wrote: I have a client with the following EAV inspired schema. CREATE TABLE many_tables ( table_id text primary key, -- defines which virtual table is encoded attribute1 text, attribute2 text, attribute3 text, attribute4 text, ... ); Maybe you can approach the problem from another end, and make the many_tables table the virtual one and all the others into real tables? -- Hannu Krosing Database Architect Skype Technologies OÜ Akadeemia tee 21 F, Tallinn, 12618, Estonia Skype me: callto:hkrosing Get Skype for free: http://www.skype.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] password is not required, authentication is overridden
Andrew Hammond wrote: Looking into the documentation delivered with pg (PostgreSQL and pgAdmin Help) there is a page about the pgpass file, but there is no hint about the environment variable PGPASSWORD or the pgpass file on the pg_dump page; maybe you can add it? In the documentation page of psql there is a hint to the pgpass file. That's a good idea. What do you think would be a good way to say it? see the lines marked + for my text: = pg_dump ... -U username Connect as the given user -W Force a password prompt. This should happen automatically if the server requires password authentication. + You can also place your password in a .pgpass file in order to avoid any inputs (see Section 28.12, “The Password File”). Environment PGDATABASE PGHOST PGPORT PGUSER Default connection parameters. Diagnostics pg_dump internally executes SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, psql. Notes + In order to create nightly backups it might be necessary to run the authentication automatically. + Adding the password as a parameter to pg_dump is not supported because this appears in the process list, available to all local users. + Instead you can place your credentials in a .pgpass file (see Section 28.12, “The Password File”). If your database cluster has any local additions to the template1 database, be careful to restore the ... = 28.12. The Password File ... chmod 0600 ~/.pgpass. If the permissions are less strict than this, the file will be ignored. (The file permissions are not currently checked on Microsoft Windows, however.) + The .pgpass file will be automatically created if you're using pgAdmin III with store password being enabled in the connection settings. bye Thomas ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
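For reference (not part of the proposed doc patch above), a .pgpass entry takes the form hostname:port:database:username:password, one connection per line, with * acting as a wildcard; the values below are made up:

```
localhost:5432:mydb:backup_user:s3cret
*:*:*:postgres:anotherpassword
```

The file must be chmod 0600 or libpq will ignore it, as the quoted Section 28.12 text notes.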
Re: [PATCHES] [HACKERS] 8.2 features?
from http://dev.mysql.com/doc/refman/4.1/en/blob.html You can change the message buffer size by changing the value of the max_allowed_packet variable, but you must do so for both the server and your client program. For example, both mysql and mysqldump allow you to change the client-side max_allowed_packet value. Tom Lane wrote: If the use case is people running MySQL dumps, then there will be millions of values-targetlists in MySQL dumps. I did some experimentation just now, and could not get mysql to accept a command longer than about 1 million bytes. It complains about Got a packet bigger than 'max_allowed_packet' bytes which seems a bit odd because max_allowed_packet is allegedly set to 16 million, but anyway I don't think people are going to be loading any million-row tables using single INSERT commands in mysql either. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] password is not required, authentication is overridden
Thomas Bley wrote: + The .pgpass file will be automatically created if you're using pgAdmin III with store password being enabled in the connection settings. It strikes me that this is actually a bad thing for pgadmin3 to be doing. It should use its own file, not the default location, at least if the libpq version is >= 8.1. We provided the PGPASSFILE environment setting just so programs like this could use alternative locations for the pgpass file. Otherwise, it seems to me we are violating the POLS, as in the case of this user who not unnaturally thought he had found a major security hole. cheers andrew ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] password is not required, authentication is overridden
or maybe split the file up into sections like this example: [pgadmin3] localhost:5432:*:postgres:post [pg_dump] [psql] bye Thomas Andrew Dunstan wrote: Thomas Bley wrote: + The .pgpass file will be automatically created if you're using pgAdmin III with store password being enabled in the connection settings. It strikes me that this is actually a bad thing for pgadmin3 to be doing. It should use its own file, not the default location, at least if the libpq version is >= 8.1. We provided the PGPASSFILE environment setting just so programs like this could use alternative locations for the pgpass file. Otherwise, it seems to me we are violating the POLS, as in the case of this user who not unnaturally thought he had found a major security hole. cheers andrew ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] using constraint based partitioning to fix EAV
Hannu Krosing wrote: On a fine day, Tue, 2006-07-18 at 16:44, Andrew Hammond wrote: On 7/18/06, Aaron Bono [EMAIL PROTECTED] wrote: On 18 Jul 2006 09:07:08 -0700, Andrew Hammond [EMAIL PROTECTED] wrote: I have a client with the following EAV inspired schema. CREATE TABLE many_tables ( table_id text primary key, -- defines which virtual table is encoded attribute1 text, attribute2 text, attribute3 text, attribute4 text, ... ); Maybe you can approach the problem from another end, and make the many_tables table the virtual one and all the others into real tables? The many_tables table already exists. It works about as poorly as you'd expect. My goal is to migrate away from this atrocious design. My question is how best to do that. I'd like to take advantage of table partitioning and constraint-based exclusion, but I can't figure out how to do it since the underlying tables are heterogeneous in nature. Perhaps I'm going about this the wrong way though. I think I'll partition based on table_id and leave the native storage in the crappy text fields. Then create views of what should be the real, relational tables. That lets the developers migrate and (I hope) eases the performance burden somewhat. Once the code migration is complete I can finalize the data move. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
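A sketch of that plan, with hypothetical table_id values and column names: children of many_tables carry CHECK constraints on table_id, so constraint exclusion (with constraint_exclusion = on) can prune partitions, while a view re-types the generic text columns for one virtual table:

```sql
-- Hypothetical partitions of the EAV table, one per virtual table
CREATE TABLE many_tables_customer (
    CHECK (table_id = 'customer')
) INHERITS (many_tables);

CREATE TABLE many_tables_invoice (
    CHECK (table_id = 'invoice')
) INHERITS (many_tables);

-- Relational view over one partition; developers migrate to this
-- incrementally while legacy apps keep writing the text columns.
CREATE VIEW customer AS
    SELECT attribute1          AS name,
           attribute2::date    AS signup_date,
           attribute3::numeric AS credit_limit
    FROM many_tables_customer;
```

This keeps the legacy text storage in place while queries against the view touch only the one constrained child.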
[HACKERS] feature request: pg_dump --view
I could really use a --view option to pg_dump (and pg_restore, i imagine). This would instruct pg_dump to dump just a named view, much like --table, --trigger, --function, etc. Could this maybe be added to a todo list? ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [SQL] using constraint based partitioning to fix EAV
On a fine day, Wed, 2006-07-19 at 00:20, Hannu Krosing wrote: On a fine day, Tue, 2006-07-18 at 16:44, Andrew Hammond wrote: On 7/18/06, Aaron Bono [EMAIL PROTECTED] wrote: On 18 Jul 2006 09:07:08 -0700, Andrew Hammond [EMAIL PROTECTED] wrote: I have a client with the following EAV inspired schema. CREATE TABLE many_tables ( table_id text primary key, -- defines which virtual table is encoded attribute1 text, attribute2 text, attribute3 text, attribute4 text, ... ); Maybe you can approach the problem from another end, and make the many_tables table the virtual one and all the others into real tables? Oops, I see you kind of are :) Maybe you can solve some of the problems by doing a VIEW over a pl/SQL function? And/or maybe using triggers instead of rules. -- Hannu Krosing Database Architect Skype Technologies OÜ Akadeemia tee 21 F, Tallinn, 12618, Estonia Skype me: callto:hkrosing Get Skype for free: http://www.skype.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
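A minimal sketch of the triggers-instead-of-rules idea, again with hypothetical table and column names: a BEFORE INSERT trigger on the parent routes each row into the matching partition (rewrite rules would work too, but triggers are usually easier to maintain):

```sql
CREATE OR REPLACE FUNCTION many_tables_route() RETURNS trigger AS $$
BEGIN
    IF NEW.table_id = 'customer' THEN
        INSERT INTO many_tables_customer (table_id, attribute1, attribute2)
        VALUES (NEW.table_id, NEW.attribute1, NEW.attribute2);
    ELSE
        INSERT INTO many_tables_other (table_id, attribute1, attribute2)
        VALUES (NEW.table_id, NEW.attribute1, NEW.attribute2);
    END IF;
    RETURN NULL;  -- swallow the row so it is not stored in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER many_tables_route_trg
    BEFORE INSERT ON many_tables
    FOR EACH ROW EXECUTE PROCEDURE many_tables_route();
```

Legacy apps keep inserting into many_tables while the data lands in the constrained children.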
Re: [HACKERS] [COMMITTERS] pgsql: Done: o -Display IN, INOUT, and OUT
David Fetter wrote: On Tue, Jul 18, 2006 at 03:20:51PM -0300, Bruce Momjian wrote: Log Message: --- Done: o -Display IN, INOUT, and OUT parameters in \df Oops! Should I change the patch to say IN when it's an IN parameter? Right now, it stays silent. [ Moved to hackers.] No idea. Comments? -- Bruce Momjian [EMAIL PROTECTED] EnterpriseDBhttp://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] url for TODO item, is it right?
Tom Lane wrote: Michael Fuhr [EMAIL PROTECTED] writes: On Mon, Jul 17, 2006 at 12:25:09AM -0500, Jaime Casanova wrote: i found this on the Monitoring section: o Allow protocol-level BIND parameter values to be logged http://archives.postgresql.org/pgsql-hackers/2006-02/msg00165.php But i don't understand why that thread is related to the TODO item, i'm missing something? Possibly the message renumbering that Tom griped about: http://archives.postgresql.org/pgsql-www/2006-07/msg00061.php Yeah. I think the TODO item is intended to point to what is now http://archives.postgresql.org/pgsql-hackers/2006-02/msg00163.php or one of the earlier messages in that thread. Perhaps when Bruce realizes he needs to recheck every link in the TODO files, he'll get on the warpath with me ;-) (Sorry, just catching up on this issue.) Yes, I can fix the TODO item URLs, but many email messages reference URLs themselves: http://archives.postgresql.org/pgsql-patches/2006-06/msg00096.php The URL in the email actually works, but I am sure others do not. Because we don't have control over the email contents (think Google), I don't think we can renumber old email items without a continual stream of complaints from users. I always wonder why we don't add a unique id to each email message itself. I mean, let the mailing list program add a sequential number to each email's subject. For example, emails on the pgsql-jp ML (PostgreSQL general discussion in Japanese, managed by JPUG) have subject headers like this: [pgsql-jp: 34814] pgpool 2.5 released By using this method our TODO list can refer to an email's logical id (34814 in this case), which is independent of the archive URL. The archive URL is something like a physical id; it may change accidentally and is not convenient for this kind of usage IMO. -- Tatsuo Ishii SRA OSS, Inc. Japan ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] [COMMITTERS] pgsql: Done: o -Display IN, INOUT,
Bruce Momjian wrote: David Fetter wrote: On Tue, Jul 18, 2006 at 03:20:51PM -0300, Bruce Momjian wrote: Log Message: --- Done: o -Display IN, INOUT, and OUT parameters in \df Oops! Should I change the patch to say IN when it's an IN parameter? Right now, it stays silent. [ Moved to hackers.] No idea. Comments? Actually, I'd be inclined to put IN if there are INOUT or OUT parameters, and nothing if they are all IN parameters, but I realise this might seem inconsistent. cheers andrew ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] [COMMITTERS] pgsql: Done: o -Display IN, INOUT, and OUT parameters in \df
On Tue, Jul 18, 2006 at 06:07:01PM -0400, Bruce Momjian wrote: David Fetter wrote: On Tue, Jul 18, 2006 at 03:20:51PM -0300, Bruce Momjian wrote: Log Message: --- Done: o -Display IN, INOUT, and OUT parameters in \df Oops! Should I change the patch to say IN when it's an IN parameter? Right now, it stays silent. [ Moved to hackers.] No idea. Comments? I'm thinking no. Parameters by default are unnamed IN parameters, so I exposed and marked things which differed from that. :) Cheers, D -- David Fetter [EMAIL PROTECTED] http://fetter.org/ phone: +1 415 235 3778AIM: dfetter666 Skype: davidfetter Remember to vote! ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [PATCHES] [HACKERS] 8.2 features?
On Tue, Jul 18, 2006 at 02:19:01PM -0400 I heard the voice of Tom Lane, and lo! it spake thus: I did some experimentation just now, and could not get mysql to accept a command longer than about 1 million bytes. It complains about Got a packet bigger than 'max_allowed_packet' bytes which seems a bit odd because max_allowed_packet is allegedly set to 16 million, but anyway I don't think people are going to be loading any million-row tables using single INSERT commands in mysql either. On the contrary, I've hit it several times by just trying to import [into another database] the output of a mysqldump I just did. Great design, that... -- Matthew Fuller (MF4839) | [EMAIL PROTECTED] Systems/Network Administrator | http://www.over-yonder.net/~fullermd/ On the Internet, nobody can hear you scream. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] feature request: pg_dump --view
On Tue, Jul 18, 2006 at 03:48:52PM -0700, Neil Conway wrote: On Tue, 2006-07-18 at 18:06 -0400, Phil Frost wrote: I could really use a --view option to pg_dump (and pg_restore, i imagine). pg_dump -t view_name will work. -Neil Oh, you got me all worked up. I was reading this as a way to dump the CONTENTS of a view not the DEFINITION of a view. I thought someone sneaked in pg_dump of a query in there. --elein ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] feature request: pg_dump --view
elein wrote: pg_dump -t view_name will work. Oh, you got me all worked up. I was reading this as a way to dump the CONTENTS of a view not the DEFINITION of a view. I thought someone sneaked in pg_dump of a query in there. How would you load such a dump, since views are by default select-only? cheers andrew ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] [COMMITTERS] pgsql: Done: o -Display IN, INOUT, and OUT parameters in \df
David Fetter [EMAIL PROTECTED] writes: On Tue, Jul 18, 2006 at 06:07:01PM -0400, Bruce Momjian wrote: [ Moved to hackers.] No idea. Comments? I'm thinking no. Parameters by default are unnamed IN parameters, so I exposed and marked things which differed from that. :) I agree; the patch already makes \df more noisy than some would wish. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Progress bar updates
Gregory Stark wrote: Has anyone looked thought about what it would take to get progress bars from clients like pgadmin? (Or dare I even suggest psql:) Some weeks ago I proposed a PROGRESS parameter for COPY, to enable progress feedback via notices. tgl thinks nobody needs that... Regards, Andreas ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] password is not required, authentication is overridden
From: Andrew Dunstan Thomas Bley wrote: + The .pgpass file will be automatically created if you're using pgAdmin III with store password being enabled in the connection settings. It strikes me that this is actually a bad thing for pgadmin3 to be doing. It should use its own file, not the default location, at least if the libpq version is >= 8.1. We provided the PGPASSFILE environment setting just so programs like this could use alternative locations for the pgpass file. Otherwise, it seems to me we are violating the POLS, as in the case of this user who not unnaturally thought he had found a major security hole. Ummm, the function which pgAdmin offers is the best available at present. I do not think that PGPASSFILE clearly avoids the danger; it would probably be easy for a malicious user to find the relocated file. I consider it a problem that the password is ultimately plain text. I made a proposal about this before, but it was indicated that further deliberation is required. I want to consider a good method again. Does anyone have a good proposal? Regards, Hiroshi Saito ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] gBorg misbehaviour
I mentioned this on -general, I believe, but gborg is on one of our last FreeBSD 4.x servers right now ... I have to schedule moving it to the FreeBSD 6.x machines ... neptune had some problems early this morning, and the 4.x fsck took 9 @@[EMAIL PROTECTED]@# hours to run ;( On Tue, 18 Jul 2006, Chris Browne wrote: Looks like gBorg has gone down... The Slony-I project does plan to move to pgFoundry, once 1.2 is released... http://slony-wiki.dbitech.ca/index.php/Move_to_PgFoundry_Checklist But we need to get to that point (1.2) first. Alas, gBorg being down today doesn't help :-(. -- (format nil [EMAIL PROTECTED] cbbrowne ntlug.org) http://cbbrowne.com/info/sap.html Success is something I will dress for when I get there, and not until. -- Unknown Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email . [EMAIL PROTECTED] MSN . [EMAIL PROTECTED] Yahoo . yscrappy Skype: hub.org ICQ . 7615664 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] url for TODO item, is it right?
On Tue, 18 Jul 2006, Bruce Momjian wrote: Greg Sabino Mullane wrote: http://archives.postgresql.org/pgsql-www/2006-07/msg00061.php Yeah. I think the TODO item is intended to point to what is now http://archives.postgresql.org/pgsql-hackers/2006-02/msg00163.php or one of the earlier messages in that thread. This is a very ugly problem. Note that there are also URLs that cannot be changed, such as older messages that point to archive posts, and many places on the web outside of our control. Why can't we just write a script that creates new numbers as needed, such as msg00163.1.php and msg00163.2.php? As far as I can tell, there is nothing magical about the naming schema itself that would cause such URLs to break anything. Agreed. It is nice to have the emails numbered in arrival order, but changes to old URLs are worse. 'k, so is the consensus here that I regenerate everything with the 'broken msg separator', and then revert to the unbroken one for new stuff? its no sweat, I just fear this is going to re-crop up sometime in the future if we ever have to regenerate from the mbox files, as we'll have some in 'broken format' and some in the 'unbroken', but renumbering *then* will still affect everything ... Basically, we're just deferring the headaches to a later date when we have no choice :( Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email . [EMAIL PROTECTED] MSN . [EMAIL PROTECTED] Yahoo . yscrappy Skype: hub.org ICQ . 7615664 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Progress bar updates
On Tue, 2006-07-18 at 14:35 -0400, Gregory Stark wrote: My first thought would be a message like CancelQuery which would cause the backend to peek into a static data structure and return a message that the client could parse and display something intelligent. I'm not quite sure what you're suggesting; presumably you'd need to open another client connection to send the status report message to a backend (since a backend will not be polling its input socket during query execution). That just seems like the wrong approach -- stashing a backend's current status into shared memory sounds more promising, IMHO, and won't require changes to the FE/BE protocol. I would suggest starting with utility functions like index builds or COPY which would have to be specially handled anyways. Handling all optimizable queries in a single generic implementation seems like something to tackle only once the basic infrastructure is there and working for simple cases. Of course the estimates would be not much better than guesses. Estimating query progress for DDL should be reasonably doable, but I think it would require some hard thought to get even somewhat accurate estimates for SELECT queries -- and I'm not sure there's much point doing this if we don't at least have an idea how we might implement reasonably accurate progress reporting for every kind of query. This paper is worth a read: Gang Luo, Jeffrey F. Naughton, Curt Ellmann and Michael Watzke: Toward a Progress Indicator for Database Queries. SIGMOD Conference 2004: 791-802. Interestingly, they apparently implemented a prototype using PostgreSQL. -Neil ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] RESET CONNECTION?
Bruce Momjian wrote: Mario Weilguni wrote: Will this patch make it into 8.2? http://archives.postgresql.org/pgsql-patches/2004-12/msg00228.php It's a really nice feature, would be extremely useful with tools like pgpool. No, it will not because RESET CONNECTION can mess up interface code that doesn't want the connection reset. We are not sure how to handle that. Hmm, what interface code are you talking about? -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] url for TODO item, is it right?
Tatsuo Ishii [EMAIL PROTECTED] writes: Because we don't have control over the email contents (think Google), I don't think we can renumber old email items without a continual stream of complaints from users. I always wonder why we don't add a unique id to each email message itself. But that doesn't do anything to fix the immediate problem ... regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] feature request: pg_dump --view
On Tue, 2006-07-18 at 18:06 -0400, Phil Frost wrote: I could really use a --view option to pg_dump (and pg_restore, i imagine). pg_dump -t view_name will work. -Neil ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Progress bar updates
Neil Conway [EMAIL PROTECTED] writes: I'm not quite sure what you're suggesting; presumably you'd need to open another client connection to send the status report message to a backend (since a backend will not be polling its input socket during query execution). That just seems like the wrong approach -- stashing a backend's current status into shared memory sounds more promising, IMHO, and won't require changes to the FE/BE protocol. Yeah, I was about to make the same comment. The new support for query status in shared memory should make it pretty cheap to update a progress indicator there, and then it'd be trivial to expose the indicator to other backends via pg_stat_activity. Sending the progress info directly to the connected client implies protocol changes (fairly trivial ones) and client changes (possibly highly nontrivial ones --- think about how you'd get the info out through something like a webserver application with multiple layers of software in the way). In practice, if a query is taking long enough for this feature to be interesting, making another connection and looking to see what's happening is not a problem, and it's likely to be the most practical way anyway for many clients. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
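The "make another connection and look" approach already works today in a crude form via pg_stat_activity, assuming stats_command_string is enabled; a progress column exposed the same way would slot right in. For example:

```sql
-- Run from a second connection to see what other backends are doing
-- (requires stats_command_string = on; superusers see all queries)
SELECT procpid, usename, current_query,
       now() - query_start AS running_for
FROM pg_stat_activity
WHERE current_query NOT LIKE '<IDLE>%';
```

A webserver application with layers of software in the way could issue this from a monitoring process without touching the FE/BE protocol at all.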
Re: [HACKERS] [PATCHES] pg_regress in C
Magnus Hagander [EMAIL PROTECTED] writes: Per discussion at the conference: In order to run the regression tests on Windows without msys, pg_regress needs to be reimplemented in C. Patch committed after significant further work. As committed, pg_regress.c is pretty nearly an exact replacement for the shell script; the only significant deviation is that the --temp_install switch's argument is required not optional. (This is because our homegrown version of getopt_long doesn't allow optional arguments. Maybe that should be fixed sometime.) There is one possibly important loose end: the shell script makes an effort to catch signals and shut down the temp postmaster before exiting, while there's no such code in the C version. I'm not sure if it's necessary. At least on my machine, if you type control-C while the tests are running then the kernel sends SIGINT to everything that's part of the terminal's process group, which will include the postmaster --- so the shutdown happens anyway. I have no idea if that'll work on Windows... One reason I didn't try to do this is I'm a bit hesitant to write a signal handler that does anything as interesting as a system() call, which would seem to be necessary to duplicate what the shell script did. Comments? regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [PATCHES] [HACKERS] 8.2 features?
I did some experimentation just now, and could not get mysql to accept a command longer than about 1 million bytes. It complains about Got a packet bigger than 'max_allowed_packet' bytes which seems a bit odd because max_allowed_packet is allegedly set to 16 million, but anyway I don't think people are going to be loading any million-row tables using single INSERT commands in mysql either. Strange. Last time I checked I thought MySQL dump used 'multivalue lists in inserts' for dumps, for the same reason that we use COPY. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [PATCHES] [HACKERS] 8.2 features?
I did some experimentation just now, and could not get mysql to accept a command longer than about 1 million bytes. It complains about Got a packet bigger than 'max_allowed_packet' bytes which seems a bit odd because max_allowed_packet is allegedly set to 16 million, but anyway I don't think people are going to be loading any million-row tables using single INSERT commands in mysql either. Ah no, I'm mistaken. It's not by default in mysqldump, but it does seem recommended. This is from man mysqldump: -e|--extended-insert Allows utilization of the new, much faster INSERT syntax. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
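For reference, the syntax -e packs many rows into one statement, and the multi-value-list support discussed for 8.2 accepts the same form (the table name here is hypothetical):

```sql
-- Extended (multi-row) INSERT, as emitted by mysqldump --extended-insert
INSERT INTO demo_tbl (id, val) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
```

This is why dump size, not row count, governs how many rows land in each INSERT: the client stops appending value lists before the statement exceeds the packet limit.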
Re: [PATCHES] [HACKERS] 8.2 features?
Christopher Kings-Lynne [EMAIL PROTECTED] writes: Strange. Last time I checked I thought MySQL dump used 'multivalue lists in inserts' for dumps, for the same reason that we use COPY I think Andrew identified the critical point upthread: they don't try to put an unlimited number of rows into one INSERT, only a megabyte or so's worth. Typical klugy-but-effective mysql design approach ... regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] url for TODO item, is it right?
Marc G. Fournier wrote: 'k, so is the consensus here that I regenerate everything with the 'broken msg separator', and then revert to the unbroken one for new stuff? its no sweat, I just fear this is going to re-crop up sometime in the future if we ever have to regenerate from the mbox files, as we'll have some in 'broken format' and some in the 'unbroken', but renumbering *then* will still affect everything ... One idea is to keep the old archives using the broken separator, and regenerate the good archives in some other directory, or with a different prefix (say, instead of msg0234.php have it be named mesg0234.php or msg0234.1.php, etc). That way the old URLs would continue to work, and there wouldn't be a problem if the archives need to be regenerated in the future. OTOH it would be good to have a collection of messages by Message-Id which could be used as a permalink. For example http://archives.postgresql.org/by-id/[EMAIL PROTECTED] or something like that. -- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Progress bar updates
Andreas, Some weeks ago I proposed a PROGRESS parameter for COPY, to enable progress feedback via notices. tgl thinks nobody needs that... Well, *Tom* doesn't need it. What mechanism did you propose to make this work? -- Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq