Re: [HACKERS] pg_regress breaks on msys
Hi Tom-san. This is very strange!!

$ make -s
In file included from preproc.y:6668:
pgc.c: In function `yylex':
pgc.c:1539: warning: label `find_rule' defined but not used
C:/MinGW/include/ctype.h: At top level:
pgc.c:3724: warning: `yy_flex_realloc' defined but not used
initdb.c: In function `locale_date_order':
initdb.c:2163: warning: `%x' yields only last 2 digits of year in some locales
pg_backup_tar.c: In function `_tarAddFile':
pg_backup_tar.c:1052: warning: comparison is always false due to limited range of data type
All of PostgreSQL successfully made. Ready to install.

$ make check
make -C ../../../src/port all
make[1]: Entering directory `/home/hi-saito/postgresql-8.2devel-20060720/src/port'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/hi-saito/postgresql-8.2devel-20060720/src/port'
make -C ../../../contrib/spi refint.dll autoinc.dll
make[1]: Entering directory `/home/hi-saito/postgresql-8.2devel-20060720/contrib/spi'
make[1]: `refint.dll' is up to date.
make[1]: `autoinc.dll' is up to date.
make[1]: Leaving directory `/home/hi-saito/postgresql-8.2devel-20060720/contrib/spi'
rm -rf ./testtablespace
mkdir ./testtablespace
./pg_regress --temp-install=./tmp_check --top-builddir=../../.. --temp-port=55432 --schedule=./parallel_schedule --multibyte=SQL_ASCII --load-language=plpgsql
== creating temporary installation ==
== initializing database system ==
== starting postmaster ==
running on port 55432 with pid 1964
== creating database regression ==
CREATE DATABASE
ALTER DATABASE
== installing plpgsql ==
CREATE LANGUAGE
== running regression test queries ==
parallel group (13 tests): text oid varchar char name float4 int2 boolean int8 int4 float8 bit numeric
     boolean              ... ok
     char                 ...
diff command failed with status 1: diff -w ./expected/char.out ./results/char.out > ./results/char.diff
server stopped
make: *** [check] Error 2

However,

$ ls -l results/char.diff
ls: results/char.diff: No such file or directory

Um...

$ diff -w ./expected/char.out ./results/char.out
66d65
| A
71c70
(5 rows)
---
(4 rows)
79d77
| A
84c82
(6 rows)
---
(5 rows)
90a89
| A
92c91
(1 row)
---
(2 rows)
99a99
| A
101c101
(2 rows)
---
(3 rows)

$ diff -w ./expected/char.out ./results/char.out > ./results/char.diff
$ ls -l results/char.diff
-rw-r--r-- 1 hi-saito pgsql 204 Jul 20 15:23 results/char.diff

[EMAIL PROTECTED] ~/postgresql-8.2devel-20060720/src/test/regress
$ cat results/char.diff
66d65
| A
71c70
(5 rows)
---
(4 rows)
79d77
| A
84c82
(6 rows)
---
(5 rows)
90a89
| A
92c91
(1 row)
---
(2 rows)
99a99
| A
101c101
(2 rows)
---
(3 rows)

Furthermore, tracking is required.

Regards,
Hiroshi Saito

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] float8 regression failure (HEAD, cygwin)
Adrian Maier schrieb:
> Hello,
> While setting up a buildfarm installation for cygwin, I've encountered the following regression failure:
>
>   float8 ... FAILED
>
> == pgsql.3132/src/test/regress/regression.diffs ==
> *** ./expected/float8-small-is-zero.out   Tue Jul 18 09:24:52 2006
> --- ./results/float8.out                  Tue Jul 18 09:53:42 2006
> ***************
> *** 13,29 ****
>   SELECT '-10e400'::float8;
>   ERROR:  -10e400 is out of range for type double precision
>   SELECT '10e-400'::float8;
> !  float8
> !
> !  0
> ! (1 row)
> !
>   SELECT '-10e-400'::float8;
> !  float8
> !
> !  -0
> ! (1 row)
> !
>   -- bad input
>   INSERT INTO FLOAT8_TBL(f1) VALUES ('');
>   ERROR:  invalid input syntax for type double precision:
> --- 13,21 ----
>   SELECT '-10e400'::float8;
>   ERROR:  -10e400 is out of range for type double precision
>   SELECT '10e-400'::float8;
> ! ERROR:  10e-400 is out of range for type double precision
>   SELECT '-10e-400'::float8;
> ! ERROR:  -10e-400 is out of range for type double precision
>   -- bad input
>   INSERT INTO FLOAT8_TBL(f1) VALUES ('');
>   ERROR:  invalid input syntax for type double precision:
> ***************
> *** 377,383 ****
> --- 369,377 ----
>   INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e400');
>   ERROR:  -10e400 is out of range for type double precision
>   INSERT INTO FLOAT8_TBL(f1) VALUES ('10e-400');
> + ERROR:  10e-400 is out of range for type double precision
>   INSERT INTO FLOAT8_TBL(f1) VALUES ('-10e-400');
> + ERROR:  -10e-400 is out of range for type double precision
>   -- maintain external table consistency across platforms
>   -- delete all values and reinsert well-behaved ones
>   DELETE FROM FLOAT8_TBL;
>
> This is happening on cygwin 1.5.20 (running on top of WinXP), gcc 3.4.4.
> The entire check.log can be found here: http://www.newsoftcontrol.ro/~am/pgfarm/check.log
> The other logs generated by the buildfarm can be found here: http://www.newsoftcontrol.ro/~am/pgfarm/
> Thanks,

Which postgresql version?

Can we have a regular cygwin error report mailed to cygwin at cygwin.com, please?
See http://cygwin.com/problems.html (cygcheck -s -v -r > cygcheck.out)

Looks like a strtod() newlib feature, but I haven't inspected closely.
http://sources.redhat.com/ml/newlib/2006/msg00020.html

BTW: HAVE_LONG_LONG_INT_64 is defined, so INT64_IS_BUSTED is defined also.

Does it look the same on redhat fedora? Our buildfarm doesn't have these issues; this runs gcc-3.3.3 and gcc-3.4.4.

The problem I see is that float8in() in src/backend/utils/adt/float.c checks only for "-Infinity" and not "-inf" as returned by newlib:

    pg_strncasecmp(num, "-Infinity", 9) == 0

Can you test this?

$ cat test-strtod.c
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <float.h>
#include <math.h>

double d;
char *tail;

int main()
{
    errno = 0;
    d = strtod("-10e400", &tail);
    printf("double: %f, errno: %d, tail: '%s', isinf: %d, fabs: %f, infmax: %d\n",
           d, errno, tail, isinf(d), fabs(d), fabs(d) > DBL_MAX ? 1 : 0);
}

$ gcc test-strtod.c; ./a
double: -inf, errno: 34, tail: '', isinf: 1, fabs: inf, infmax: 1

--
Reini Urban - postgresql-cygwin maintainer
http://phpwiki.org/  http://murbreak.at/
http://helsinki.at/  http://spacemovie.mur.at/
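A fix along the lines Reini suggests could be sketched as follows. This is a hypothetical helper, not the actual float.c code; a real patch would also need to verify what follows the matched prefix so that unrelated strings starting with "-inf" are not accepted:

```c
#include <strings.h>

/* Return 1 if "num" begins with a spelling of negative infinity.
 * Accepts "-Infinity" (glibc-style strtod output) as well as "-inf"
 * (newlib-style).  Hypothetical sketch; the real float8in() differs. */
int
is_negative_infinity(const char *num)
{
    /* "-Infinity" also begins with "-inf", so one prefix test covers both */
    return strncasecmp(num, "-inf", 4) == 0;
}
```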
Re: [HACKERS] float8 regression failure (HEAD, cygwin)
On 20/07/06, Reini Urban [EMAIL PROTECTED] wrote:
> Thanks, Which postgresql version?

The version is cvs HEAD.

> Can we have a regular cygwin error report please mailed to cygwin at cygwin.com please.
> See http://cygwin.com/problems.html (cygcheck -s -v -r cygcheck.out)
> Looks like a strtod() newlib feature, but I haven't inspected closely.
> http://sources.redhat.com/ml/newlib/2006/msg00020.html
> BTW: HAVE_LONG_LONG_INT_64 is defined, so INT64_IS_BUSTED is defined also.
> Does it look the same on redhat fedora? Our buildfarm doesn't have these issues, this runs gcc-3.3.3 and gcc-3.4.4

I'm not sure about fedora, but on NetBSD 3.0rc5, postgresql 8.1.3, I can see the same behaviour:

am=# select version();
                                version
PostgreSQL 8.1.3 on i386--netbsdelf, compiled by GCC gcc (GCC) 3.3.3 (NetBSD nb3 20040520)
(1 row)

am=# SELECT '-10e400'::float8;
ERROR:  -10e400 is out of range for type double precision

> The problem I see is that float8in() in src/backend/utils/adt/float.c checks only for "-Infinity" and not "-inf" as returned by newlib:
>     pg_strncasecmp(num, "-Infinity", 9) == 0
> Can you test this?

Sure.

> $ gcc test-strtod.c; ./a
> double: -inf, errno: 34, tail: '', isinf: 1, fabs: inf, infmax: 1

Yes, I'm getting the same output (both on cygwin and netbsd 3.0rc5):

$ ./test-strtod.exe
double: -inf, errno: 34, tail: '', isinf: 1, fabs: inf, infmax: 1

Cheers,
Adrian Maier
Re: [HACKERS] Progress bar updates
We already have EXPLAIN ANALYZE. Perhaps the right way to do this is something that provides similar output. I could see something that looks like EXPLAIN for the parts that have not yet executed, something reasonable to show progress of the currently active part of the plan (current time, rows, loops), and EXPLAIN ANALYZE output for the parts which have been completed.

Now this is something that would really help testing a system, by dynamically seeing the plans of queries which run too long. That combined with the ability to see the values of bind parameters would be a useful debug aid.

Cheers,
Csaba.
Re: CSStorm occurred again by postgreSQL8.2. (Re: [HACKERS] poor
> > If there is a work load tool of a free license, I would like to try.
>
> FYI: there is a free tpc-w implementation done by Jan available at:
> http://pgfoundry.org/projects/tpc-w-php/
>
> FYI(2): There is one more (pseudo) TPC-W implementation, by OSDL:
> http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/osdl_dbt-1/

Thank you for the information. I'll try it.

Regards,
Katsuhiko Okano
okano katsuhiko _at_ oss ntt co jp
Re: [HACKERS] poor performance with Context Switch Storm at TPC-W.
Jim C. Nasby wrote:
> If you haven't changed checkpoint timeout, this drop-off every 4-6 minutes indicates that you need to make the bgwriter more aggressive.

I'll say so to customers when proposing and explaining. Thank you for the information.

Regards,
Katsuhiko Okano
okano katsuhiko _at_ oss ntt co jp
[HACKERS] Units in postgresql.conf
One frequent source of confusion is the different units that the parameters in postgresql.conf use. shared_buffers is in 8 kB pages, work_mem is in 1 kB; bgwriter_delay is in milliseconds, checkpoint_warning is in seconds. Obviously, we can't change that without inconveniencing a lot of users. I think it would be useful to allow units to be added to these settings, for example

shared_buffers = 1000kB
checkpoint_warning = 30s

This would also allow

shared_buffers = 512MB

which is a bit cumbersome to calculate right now (you'd need shared_buffers = 65536). I haven't thought yet how to parse or implement this, but would people find this useful?

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
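Parsing along these lines could be sketched as follows. The function name and the caller-supplied legacy unit are assumptions for illustration, not an actual guc.c patch; a bare number falls back to the setting's historical unit, which keeps old configuration files working:

```c
#include <stdlib.h>
#include <string.h>

/* Parse a memory setting such as "512MB" or "1000kB" into bytes.
 * default_unit_bytes is the setting's legacy unit (8192 for
 * shared_buffers, 1024 for work_mem).  Returns -1 on a bad suffix.
 * Hypothetical sketch, not the parser PostgreSQL actually uses. */
long long
parse_memory_setting(const char *s, long long default_unit_bytes)
{
    char *end;
    long long val = strtoll(s, &end, 10);

    while (*end == ' ')
        end++;
    if (*end == '\0')
        return val * default_unit_bytes;        /* no suffix: legacy unit */
    if (strcmp(end, "kB") == 0)
        return val * 1024LL;
    if (strcmp(end, "MB") == 0)
        return val * 1024LL * 1024LL;
    if (strcmp(end, "GB") == 0)
        return val * 1024LL * 1024LL * 1024LL;
    return -1;                                  /* unknown suffix */
}
```

With this, `shared_buffers = 512MB` and the old-style `shared_buffers = 65536` would come out to the same number of bytes.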
Re: [HACKERS] Units in postgresql.conf
On 7/20/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
> I think it would be useful to allow units to be added to these settings, for example
>   shared_buffers = 1000kB
>   checkpoint_warning = 30s
> This would also allow
>   shared_buffers = 512MB
> which is a bit cumbersome to calculate right now (you'd need = 65536). I haven't thought yet how to parse or implement this, but would people find this useful?

I agree, a lot of newbies have issues with the configuration file. I have a tiny bit of code (about 20 lines I think) that will handle K, M, and G suffixes for memory. It would be equally easy to add S for seconds.

In my code, if no suffix existed, I'd just revert to the default behavior. This is probably what we'd want to do in PostgreSQL as well. The only issue in PostgreSQL is knowing what the unit conversion and scaling factor is for each parameter (8K, 1K, milliseconds, etc.); though, this wouldn't be hard to add at all.

--
Jonah H. Harris, Software Architect | phone: 732.331.1300
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 2nd Floor            | [EMAIL PROTECTED]
Iselin, New Jersey 08830            | http://www.enterprisedb.com/
Re: [HACKERS] Units in postgresql.conf
On Thu, 20 Jul 2006, Peter Eisentraut wrote:
> One frequent source of confusion are the different units that the parameters in postgresql.conf use. [...]
> I haven't thought yet how to parse or implement this, but would people find this useful?

Please! Yes!

Gavin
Re: [HACKERS] Units in postgresql.conf
On 7/20/06, Peter Eisentraut [EMAIL PROTECTED] wrote:
> One frequent source of confusion are the different units that the parameters in postgresql.conf use. [...]
> I haven't thought yet how to parse or implement this, but would people find this useful?

+1. In addition, that would make the conffile self-documenting.

--
marko
Re: [HACKERS] Units in postgresql.conf
Peter Eisentraut wrote:
> One frequent source of confusion are the different units that the parameters in postgresql.conf use. [snip]
> I haven't thought yet how to parse or implement this, but would people find this useful?

+1

I'd find this useful myself, and I think it would eliminate many mistakes by newer admins.

Joe
Re: [HACKERS] Units in postgresql.conf
On Thursday 20 July 2006 05:04, Jonah H. Harris wrote:
> I agree, a lot of newbies have issues with the configuration file. I have a tiny bit of code (about 20 lines I think) that will handle K, M, and G suffixes for memory. [...]
> The only issue in PostgreSQL is knowing what the unit conversion and scaling factor is for each parameter (8K, 1K, milliseconds, etc); though, this wouldn't be hard to add at all.

Yummy, Yummy. I'd say this would be a big boost in ability to tune for a lot of people.

--
Darcy Buskermolen
Wavefire Technologies Corp.
http://www.wavefire.com
ph: 250.717.0200  fx: 250.763.1759
Re: [HACKERS] lastval exposes information that currval does not
OK, text again updated:

    For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to "look up" objects within the schema. Without this permission, it is still possible to see the object names, e.g. by querying the system tables. Also, after revoking this permission, existing backends might have statements that have previously performed this lookup, so this is not a completely secure way to prevent object access.

---------------------------------------------------------------------------

Phil Frost wrote:
> On Wed, Jul 19, 2006 at 02:42:49PM -0400, Bruce Momjian wrote:
> > Updated text:
> >     For schemas, allows access to objects contained in the specified schema (assuming that the objects' own privilege requirements are also met). Essentially this allows the grantee to "look up" objects within the schema. Without this permission, it is still possible to see the object names, e.g. by querying the system tables, so this is not a completely secure way to prevent object access.
>
> I think you are not understanding the nature of the problem I have described. It is not just the names that can be accessed in the absence of usage on a schema, it is *the content of the relations*. It is obvious to anyone who has ever looked in pg_* that metadata is not hidden by any amount of permission twiddling with grant and revoke. This isn't great from a security standpoint, but at least it's documented and expected, so one can design around it.
>
> However, the usage on schema privilege has undocumented, unexpected behavior. One would think from the documentation and from experimentation that one cannot exercise any privileges on an object (excepting what can be done through the system catalogs) without having usage on the schema that contains it. However, this is not always the case!
> If you look at my previous posts, I have repeatedly demonstrated ways to access objects (not just the names or metadata, but the _full_ _contents_) contained in a schema to which one does not have the 'usage' privilege. The developers must consider this a feature, because none have acknowledged it as a security bug.
>
> This being the case, it is important that people be advised that the schema usage privilege does not always control access to contained objects, and that the ways in which it can be bypassed are perhaps not numerous, but definitely subtle, and thus likely to escape security audits and later be discovered by an attacker. It should be known that the PostgreSQL developers have recently added a function lastval() which newly exposes such a way to bypass the check, and that this has not been officially acknowledged as a security flaw.

--
Bruce Momjian  [EMAIL PROTECTED]
EnterpriseDB   http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] pg_regress breaks on msys
Hiroshi Saito [EMAIL PROTECTED] writes:
> This is very strange!!
>      boolean              ... ok
>      char                 ...
> diff command failed with status 1: diff -w ./expected/char.out ./results/char.out > ./results/char.diff
> server stopped

Yes, I believe the problem is that our Windows versions of the WIFEXITED/WEXITSTATUS macros are wrong. pg_regress is trying to verify that the diff call didn't fail entirely (eg, diff not there or failed to read one of the files), but the status code diff returns for "files not the same" is confusing it.

Can anyone check into what the result status conventions really are on Windows? I am tempted to change the macros to just swap the bytes, but I dunno what that will do to their existing usages to check the result of pclose() or win32_waitpid().

regards, tom lane
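To illustrate the mismatch Tom describes, assuming that Windows system() hands back the child's raw exit code rather than a Unix-style wait status: diff exits with 1 when the files differ, and the byte swap Tom suggests would make that decode as a normal exit again. These macro names are illustrative, not the actual definitions in PostgreSQL's Windows headers:

```c
/* Unix convention: normal-exit flag in the low byte, exit code in bits 8..15. */
#define UNIX_WIFEXITED(st)   (((st) & 0xff) == 0)
#define UNIX_WEXITSTATUS(st) (((st) >> 8) & 0xff)

/* Assumed Windows behavior: the status *is* the exit code.  Shifting it
 * into the high byte lets the Unix-style macros read it correctly. */
#define WIN32_TO_WAIT_STATUS(code) (((code) & 0xff) << 8)
```

Under this assumption, a raw status of 1 from diff fails UNIX_WIFEXITED, which is exactly the "diff command failed with status 1" symptom even though char.diff was produced correctly.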
Re: [HACKERS] [PERFORM] Sun Donated a Sun Fire T2000 to the PostgreSQL
Tatsuo Ishii [EMAIL PROTECTED] writes:
> > 18% in s_lock is definitely bad :-(. Were you able to determine which LWLock(s) are accounting for the contention?
>
> Sorry for the delay. Finally I got the oprofile data. It's huge (34MB). If you are interested, I can put it somewhere. Please let me know.

I finally got a chance to look at this, and it seems clear that all the traffic is on the BufMappingLock. This is essentially the same problem we were discussing with respect to Gavin Hamill's report of poor performance on an 8-way IBM PPC64 box (see hackers archives around 2006-04-21). If your database is fully cached in shared buffers, then you can do a whole lot of buffer accesses per unit time, and even though all the BufMappingLock acquisitions are in shared-LWLock mode, the LWLock's spinlock ends up being heavily contended on an SMP box.

It's likely that CVS HEAD would show somewhat better performance because of the btree change to cache local copies of index metapages (which eliminates a fair fraction of buffer accesses, at least in Gavin's test case). Getting much further than that seems to require partitioning the buffer mapping table. The last discussion stalled on my concerns about unpredictable shared memory usage, but I have some ideas on that which I'll post separately.

In the meantime, thanks for sending along the oprofile data!

regards, tom lane
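The partitioning Tom mentions can be sketched roughly as follows: hash each buffer tag to one of N locks, so concurrent lookups of different buffers usually contend on different spinlocks instead of a single BufMappingLock. The names and partition count here are hypothetical, not the scheme that was eventually committed:

```c
#include <stdint.h>

#define NUM_BUF_PARTITIONS 16          /* assumed; must be a power of two */

/* Map a buffer-tag hash to a partition number.  Each partition would get
 * its own LWLock and its own slice of the buffer mapping hash table, so
 * shared lookups spread across NUM_BUF_PARTITIONS spinlocks. */
unsigned
buf_mapping_partition(uint32_t tag_hash)
{
    return tag_hash & (NUM_BUF_PARTITIONS - 1);
}
```

The design tradeoff is the one Tom raises: per-partition hash tables make total shared memory usage harder to predict than a single table sized once.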
Re: [HACKERS] pg_regress breaks on msys
From: Tom Lane
> Yes, I believe the problem is that our Windows versions of the WIFEXITED/WEXITSTATUS macros are wrong. pg_regress is trying to verify that the diff call didn't fail entirely (eg, diff not there or failed to read one of the files), but the status code diff is returning for files not the same is confusing it.

Probably no; I also suspected it in the beginning. However, char.diff was not created under some strange condition. I think that WIFEXITED showed the strange condition. In the place which I showed above, diff makes char.diff when run manually. I expect that it is related to the system() call. I am investigating in some other environments now... However, it is not clarified yet. :-(

Regards,
Hiroshi Saito
Re: [HACKERS] Units in postgresql.conf
Peter,

> One frequent source of confusion are the different units that the parameters in postgresql.conf use. [...]
> I haven't thought yet how to parse or implement this, but would people find this useful?

Well, it's on my TODO list for 8.2 to write a simple postgresql.conf conversion utility in Perl. If you wanted to make a change like that, it would make finishing that mandatory.

Just as well; right now half the vacuum settings are in a different section than the other half.

--Josh
Re: [HACKERS] Units in postgresql.conf
Zdenek Kotala wrote:
> Time units is easy: 1h = 60min = 3600s = 3600000ms

We don't need anything larger than seconds at the moment.

> What kind of unit prefix will we use for memory?

PostgreSQL has always used 1 kB = 1024 B.

> 1) will be unit required?

No.

> What will be default unit for value without unit?

What we have now.

> I suggest mandatory unit avoid the problem with unclear default value.

Not going to happen.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
[HACKERS] Real Time Data Base using Postgre
Hello,

My name is Moisés Herrera Vázquez. I am beginning to work with PostgreSQL to build a real-time database for process control. I have some questions about which modules I would have to change for these points:

- How can I add some kind of priority to a transaction, for concurrent transactions? Which module would I have to change for this?
- How can I put time constraints on a transaction?
- Does the postmaster daemon run at user or kernel level?
- Do the PostgreSQL server processes run at user or kernel level?
- Is there any way to interact with a server process from a kernel module?
- Can PostgreSQL interact with an open real-time system, like RTLinux, RTAI, etc.?

Thanks,

M.Sc. Moisés Herrera Vázquez
Especialista Superior en Automática
Empresa de Automatización Integral, CEDAI Sucursal Villa Clara.
Teléfono: (53) 42-203311
[HACKERS] Transaction Speed
Can anybody tell me how many transactions per second Postgres can do? For example inserts, updates, deletes, etc. I'm very interested in this, because I want to use Postgres for a real-time database for process control.

Thanks and regards,

M.Sc. Moisés Herrera Vázquez
Especialista Superior en Automática
Empresa de Automatización Integral, CEDAI Sucursal Villa Clara.
Teléfono: (53) 42-203311
Re: [HACKERS] Units in postgresql.conf
Josh Berkus wrote:
> Well, it's on my TODO list for 8.2 to write a simple postgresql.conf conversion utility in Perl. If you wanted to make a change like that, it would make finishing that mandatory.

I don't understand how that is related. Or what a conversion utility would be for, for that matter.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Re: [HACKERS] Units in postgresql.conf
Josh Berkus wrote:
> Well, the main issue with changing the units of the postgresql.conf file from a user perspective is that the numbers from your 8.0/8.1 conf file would no longer work.

No one is intending to do any such change.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Re: [HACKERS] Units in postgresql.conf
Peter,

> I don't understand how that is related. Or what a conversion utility would be for, for that matter.

Well, the main issue with changing the units of the postgresql.conf file from a user perspective is that the numbers from your 8.0/8.1 conf file would no longer work. A little conversion utility to turn your 8.0 file into an 8.2 file would help solve that.

--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco
Re: [HACKERS] contrib promotion?
On 7/14/2006 12:01 PM, Tom Lane wrote:
> tsearch2 is functionality that definitely should be in core eventually, but even Oleg still says it's not done. Aside from the documentation issue, it's not clear that we've got a stable API for it.

Would moving it in its current state into core help it to get there, and what would the risk be if it is added and remains fragile for a release? If the problems are false positives/negatives on search results, then it is IMHO okay. If the problems are process crashes or the like, then it is not.

Jan

--
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
# [EMAIL PROTECTED] #
Re: [HACKERS] Progress bar updates
This is how Netezza and Teradata do it, and it works very well. In each of their cases you can see a graphical representation of the plan with progress for each stage. For the command line it would be great to just dump the current status, which would provide a snapshot of the explain analyze.

- Luke

Sent from my GoodLink synchronized handheld (www.good.com)

-----Original Message-----
From: Csaba Nagy [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 20, 2006 04:52 AM Eastern Standard Time
To: Andrew Hammond
Cc: postgres hackers
Subject: Re: [HACKERS] Progress bar updates

> We already have EXPLAIN ANALYZE. Perhaps the right way to do this is something that provides similar output. [...]
> That combined with the ability to see the values of bind parameters would be a useful debug aid.
Re: [HACKERS] float8 regression failure (HEAD, cygwin)
Reini Urban [EMAIL PROTECTED] writes:
> BTW: HAVE_LONG_LONG_INT_64 is defined, so INT64_IS_BUSTED is defined also.

You sure? INT64_IS_BUSTED should *not* be set in that case --- it's only supposed to be set if we couldn't find any 64-bit-int type at all.

As for the regression test failure, it's odd because it looks to me like the actual test output is an exact match to the default float8.out file. I'm not sure why pg_regress chose to report a diff against float8-small-is-zero.out instead. This may be another teething pain of the new pg_regress-in-C code --- could you trace through it and see what's happening?

regards, tom lane
Re: [HACKERS] Units in postgresql.conf
On Thu, Jul 20, 2006 at 01:49:36PM +0200, Peter Eisentraut wrote:
> One frequent source of confusion are the different units that the parameters in postgresql.conf use. [...]
> I haven't thought yet how to parse or implement this, but would people find this useful?

Absolutely! :)

Cheers,
D
--
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 415 235 3778  AIM: dfetter666
Skype: davidfetter

Remember to vote!
Re: [HACKERS] Units in postgresql.conf
Peter Eisentraut wrote:
> One frequent source of confusion are the different units that the parameters in postgresql.conf use. [...]
> I haven't thought yet how to parse or implement this, but would people find this useful?

It is a good idea. I am going to implement this. There are some proposals:

Time units are easy: 1h = 60min = 3600s = 3600000ms

Memory units: What kind of unit prefix will we use for memory? 1kB=1000B and 1KiB=1024B, or 1kB=1024B? See http://en.wikipedia.org/wiki/Binary_prefix for details. I suggest using the IEC standard convention. In my opinion it is much better.

And there are some other questions:

1) Will the unit be required? What will be the default unit for a value without a unit? I suggest a mandatory unit, to avoid the problem of an unclear default value.

2) Each internal representation of a setting uses a different unit. Shall I convert this representation to milliseconds and bytes? I think it is not a good idea; it could cause overflow. I suggest recomputing the value and rounding it to an integer.

Zdenek
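Zdenek's second point, converting into each setting's own internal unit with rounding rather than keeping everything in bytes, can be sketched like this. The block size and helper name are assumptions for illustration:

```c
#define ASSUMED_BLCKSZ 8192     /* assumed page size; a build-time constant */

/* Convert a user-supplied size in bytes into whole pages, rounding to
 * the nearest page.  Storing the setting in its internal unit avoids
 * the integer-overflow risk of keeping large sizes in bytes. */
long
bytes_to_pages(long bytes)
{
    return (bytes + ASSUMED_BLCKSZ / 2) / ASSUMED_BLCKSZ;
}
```

So a user writing 8MB for shared_buffers would be stored internally as 1024 pages, and an off-by-half-a-page input rounds to the nearest whole page.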
Re: [HACKERS] Units in postgresql.conf
Peter Eisentraut wrote:
> Zdenek Kotala wrote:
>> Time units are easy: 1h = 60min = 3600s = 3600000ms.
> We don't need anything larger than seconds at the moment.
>> What kind of unit prefix will we use for memory?
> PostgreSQL has always used 1 kB = 1024 B.
>> 1) Will the unit be required?
> No.
>> What will be the default unit for a value without a unit?
> What we have now.
>> I suggest making the unit mandatory, to avoid the problem of an unclear default.
> Not going to happen.

OK. The conclusion is: for time, s = second and ms = millisecond; for memory, B, kB, MB, GB. The unit is not mandatory, and if it is missing the behavior stays the same as now, for backward compatibility (no extra conversion utility needed).

The last question is whether a page unit would be useful too. For example:

    #shared_buffers = 1000    # min 16 or max_connections*2, 8KB each

means 8000 kB. But if somebody compiles Postgres with a different page size, then the resulting size will be different. With units, one could instead write for example 8MB, and the number of buffers would be 8MB/page_size.

Zdenek

PS: I have some GUC patches in the patches queue. Could anybody test/commit them? I would like to continue on the latest version of the guc subsystem. Thanks.
Re: [HACKERS] Transaction Speed
This depends on your server's capability and performance. You can use PostgreSQL as a real-time database. It is real, not a toy :-)

Adnan DURSUN
ASRIN Bilisim Ltd., Turkey

- Original Message -
From: moises
To: pgsql-hackers@postgresql.org
Sent: Thursday, July 20, 2006 3:36 PM
Subject: [HACKERS] Transaction Speed

> Can anybody tell me how many transactions Postgres can perform in a second? For example inserts, updates, deletes, etc. I am very interested in this, because I want to use Postgres as a real-time database for process control. Thanks and regards.
>
> M.Sc. Moisés Herrera Vázquez
> Senior Automation Specialist
> Empresa de Automatización Integral, CEDAI, Villa Clara branch
> Teléfono: (53) 42-203311
Re: [HACKERS] Transaction Speed
moises wrote:
> Can anybody tell me how many transactions Postgres can perform in a second?

It depends on many things:

1) the speed of the hardware/OS, the number and type of disks, whether you use RAID or not, ...
2) the number of concurrent accesses
3) the size of the data processed in one transaction
4) the database model
...

It is not possible to determine in general. You must take an actual machine and run your own tests.

> I'm very interested in this, because I want to use Postgres as a real-time database for process control.

I'm not sure that Postgres is a good database for real-time applications. For a real-time application you need to know the maximal response time, and I am afraid that Postgres is not a good choice for that. (MySQL, Firebird, Oracle, ... have the same limitation.)

Zdenek
[HACKERS] Using EXPLAIN in regressions?
Is it safe to use the output of EXPLAIN in regression tests? I want to make sure that certain GiST indexes are being used by sample queries, but I am not sure if it is safe to rely on the format of EXPLAIN to be unchanging. Thoughts?

Josh Reich
Re: [HACKERS] Units in postgresql.conf
Peter Eisentraut wrote:
> I think it would be useful to allow units to be added to these settings, for example... shared_buffers = 512MB, which is a bit cumbersome to calculate right now (you'd need = 65536). I haven't thought yet how to parse or implement this, but would people find this useful?

Would this extend to things like random_page_cost and similar? If random_page_cost were specifiable in seconds or ms, it might be easier to someday write a program to measure such values on particular hardware platforms. (Though I guess for that to work, the config file would also need to accept the reference cost (that of a non-random page access) in the same units as well...)
Re: [HACKERS] Using EXPLAIN in regressions?
Joshua Reich [EMAIL PROTECTED] writes:
> Is it safe to use the output of EXPLAIN in regression tests?

No, not unless you want the test to break every other week.

> I want to make sure that certain GiST indexes are being used by sample queries,

About the best bet is to make sure that's the *only* available index, and set enable_seqscan = off to be sure. In some cases, you can use the ordering of the returned rows as a proxy --- if they're returned in something other than heap order then it must have used an index. I'm not sure if this will be a stable answer for GIST indexes though.

regards, tom lane
Re: [HACKERS] Using EXPLAIN in regressions?
On Thu, 2006-07-20 at 18:19 -0400, Tom Lane wrote:
> About the best bet is to make sure that's the *only* available index, and set enable_seqscan = off to be sure.

Another approach would be to define a UDF that takes a query string, runs the parser, rewriter, and planner on the string, and then checks various properties of the resulting Plan (e.g. that it includes a GiST index scan).

-Neil
Re: [PATCHES] [HACKERS] 8.2 features?
Tom Lane wrote:
> Joe Conway [EMAIL PROTECTED] writes:
>> I was actually just looking at that and ended up thinking that it might be better to deal with it one level down in ExecProject (because it is already passing targetlists directly to ExecTargetList).
>
> I'd vote against that, because (a) ExecProject is used by all executor node types, and we shouldn't add overhead to all of them for the benefit of one; (b) ExecProject doesn't actually have any internal state at the moment. To keep track of which targetlist to evaluate next, it would not only need some internal state, it would have to be told the current es_direction. This stuff fits much better at the exec node level --- again, I'd suggest looking at Append for a comparison.

OK. But really the executor part of this is not the hard part; what we need to think about first is what's the impact on the Query data structure that the parser/rewriter/planner use. After a quick look, I think changing Query.targetList is too big an impact, and probably unneeded given your suggestion below. One of the problems with the current code is that the targetList in the VALUES... case is being used for two purposes: 1) to define the column types, and 2) to hold the actual data. By putting the data into a new node type, I think the targetList reverts to being just a list of datatypes, as it is with INSERT ... SELECT ...

> I'm still liking the idea of pushing multi-values into a jointree node type. Basically this would suggest representing VALUES ... as if it were SELECT * FROM VALUES ... (which I believe is actually legal syntax per spec) --- in the general case you'd need to have a Query node that has a trivial col1, col2, col3, ... targetlist and then the multiple values lists are in some kind of jointree entry. But possibly this could be short-circuited somehow, at least for INSERT.

I'm liking this too. But when you say jointree node, are you saying to model the new node type after NestLoop/MergeJoin/HashJoin nodes?
These are referred to as join nodes in ExecInitNode. Or, as you mentioned a couple of times, should this look more like an Append node?

Thanks, Joe
Re: [HACKERS] plPHP and plRuby
Tom Lane wrote:
> Peter Eisentraut [EMAIL PROTECTED] writes:
>> And if that didn't convince you, I still got PL/sh in the wait ...
>
> It seems like there may be enough interest in PL/Ruby to justify including it in our distro, but after taking a look at the package I can see a couple of pretty serious objections:
>
> 1. Wrong license. Unless the author can be persuaded to relicense as straight BSD, this discussion is a waste of time.

As I mentioned previously, I have already negotiated for a relicense.

> 2. Coding style. The man does not believe in comments; do we really think anyone else is going to be able to maintain his work?

Yes, this was one of my concerns.

Sincerely, Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Re: [PATCHES] [HACKERS] 8.2 features?
Joe Conway [EMAIL PROTECTED] writes:
> I'm liking this too. But when you say jointree node, are you saying to model the new node type after NestLoop/MergeJoin/HashJoin nodes? These are referred to as join nodes in ExecInitNode. Or as you mentioned a couple of times, should this look more like an Append node?

No, I guess I confused you by talking about the executor representation at the same time. This is really unrelated to the executor. The join tree I'm thinking of here is the data structure that dangles off Query.jointree --- it's a representation of the query's FROM clause, and (at present) can contain RangeTblRef, FromExpr, and JoinExpr nodes. See the last hundred or so lines of primnodes.h for some details. The jointree is used by the planner to compute the plan node tree that the executor will run, but it's not the same thing.

There are basically two ways you could go about this:

1. Make a new jointree leaf node type to represent a VALUES construct, and dangle the list of lists of expressions off that.

2. Make a new RangeTblEntry type to represent a VALUES construct, and just put a RangeTblRef to it into the jointree. The expressions dangle off the RangeTblEntry.

Offhand I'm not certain which of these would be cleanest. The second way has some similarities to the way we handle set operation trees (UNION et al), so it might be worth looking at that stuff. However, being a RangeTblEntry has a lot of baggage (eg, various routines expect to find an RTE alias, column names, column types, etc) and maybe we don't need all that for VALUES. One advantage of the first way is that you could use the same node type for the raw parser output delivered by gram.y. This is a bit of a type cheat, because raw parser output is logically distinct from what parse analysis produces, but we do it in lots of other places too (JoinExpr for instance is used that way).
You should in any case have a clear idea of the difference between the raw and analyzed parser representations --- for instance, the raw form won't contain any datatype info, whereas the analyzed form must. This might or might not need to be visible directly in the VALUES node --- it might be that you can rely on the datatype info embedded in the analyzed expressions.

regards, tom lane
Re: [HACKERS] plPHP and plRuby
Peter Eisentraut wrote:
> Hannu Krosing wrote:
>> So we would have src/pl/plphp/README.TXT, src/pl/pljava/README.TXT, src/pl/plj/README.TXT, and anybody looking for PLs would find the info in a logical place.

It could be interesting to have something like this:

    ./configure --with-plruby

and have it actually fetch the latest plruby sources from the net and build them, à la Ports. This would take some organization of course, but it would be a relatively easy way to increase our code base without bloating core.

Joshua D. Drake
Re: [HACKERS] Units in postgresql.conf
On Thu, 20 Jul 2006, Josh Berkus wrote:
> Peter,
>> I don't understand how that is related. Or what a conversion utility would be for, for that matter.
> Well, the main issue with changing the units of the postgresql.conf file, from a user perspective, is that the numbers from your 8.0/8.1 conf file would no longer work. A little conversion utility to turn your 8.0 file into an 8.2 file would help solve that.

Josh, I would imagine that Peter intends to handle backward compatibility by interpreting values without explicit units in the units assumed pre-8.2.

Thanks, Gavin
Re: [HACKERS] [COMMITTERS] pgsql: As a stopgap to get the Windows buildfarm members running again,
Tom-san, ah... sorry. That looked like the optimal result:

=== All 100 tests passed. ===

However, the result tells a lie, so the pursuit continued:

$ diff -w ./expected/char.out ./results/char.out
66d65
$ diff -w ./expected/char_1.out ./results/char.out

Comparing against char_1.out produces no differences, so it seems that it is now in good order. Sorry, I was distracted for a while. Thanks.

Regards, Hiroshi Saito
Re: [HACKERS] [COMMITTERS] pgsql: As a stopgap to get the Windows buildfarm members running again,
Hi Tom-san.

From: Tom Lane [EMAIL PROTECTED]
> Log Message:
> As a stopgap to get the Windows buildfarm members running again, hot-wire the check on diff's exit status to check for literally 0 or 1. Someone should look into why WIFEXITED/WEXITSTATUS don't work for this, but I've spent more than enough time on it already.
>
> Modified Files:
> pgsql/src/test/regress: pg_regress.c (r1.9 -> r1.10)
> (http://developer.postgresql.org/cvsweb.cgi/pgsql/src/test/regress/pg_regress.c.diff?r1=1.9&r2=1.10)

Hm. Surely, although it is forced, it removes one problem:

schedule=./parallel_schedule --multibyte=SQL_ASCII --load-language=plpgsql
== creating temporary installation ==
== initializing database system ==
== starting postmaster ==
running on port 55432 with pid 1964
== creating database regression ==
CREATE DATABASE
ALTER DATABASE
== installing plpgsql ==
CREATE LANGUAGE
== running regression test queries ==
parallel group (13 tests): text oid varchar char name float4 int2 boolean int8 int4 float8 bit numeric
boolean ... ok
char ... ok
name ... ok
varchar ... ok
text ... ok
int2 ... ok
(snip)
truncate ... ok
alter_table ... ok
sequence ... ok
polymorphism ... ok
rowtypes ... ok
stats ... ok
tablespace ... ok
== shutting down postmaster ==
server stopped

=== All 100 tests passed. ===

However, the result tells a lie. So, pursuit continues:

$ diff -w ./expected/char.out ./results/char.out
66d65
< 	| A
71c70
< (5 rows)
---
> (4 rows)
79d77
< 	| A
84c82
< (6 rows)
---
> (5 rows)
90a89
> 	| A
92c91
< (1 row)
---
> (2 rows)
99a99
> 	| A
101c101
< (2 rows)
---
> (3 rows)

Ugaa... this is strange.

Regards, Hiroshi Saito
Re: [PATCHES] [HACKERS] 8.2 features?
Tom Lane wrote:
> No, I guess I confused you by talking about the executor representation at the same time. This is really unrelated to the executor. The join tree I'm thinking of here is the data structure that dangles off Query.jointree --- it's a representation of the query's FROM clause, and (at present) can contain RangeTblRef, FromExpr, and JoinExpr nodes. See the last hundred or so lines of primnodes.h for some details. The jointree is used by the planner to compute the plan node tree that the executor will run, but it's not the same thing.

Ah, that helps. Thanks for the explanation. I'll start digging in again...

Thanks, Joe
Re: [HACKERS] [PATCHES] modular pg_regress.sh
Joachim Wieland [EMAIL PROTECTED] writes:
> I thought however that it would be nice to offer a kind of regression framework that lets you easily parse command line options, create a temp environment (if desired), initialize the server with databases, roles, languages, start up the server, clean up everything afterwards, and so on.

Given that pg_regress is now in C (and only a few minutes ago did I stop entertaining the idea of reverting that patch ... man, the Windows port was painful), the way this would need to work is to split it into a library and a main program. Which isn't unreasonable, if you feel like doing the work. I'd suggest that at that point pg_regress is no longer some random thingie under src/test/regress, but would need to be promoted to have its own source directory, perhaps under src/bin.

Also, we've speculated endlessly about developing a more general testing framework that could, for instance, support serious testing of concurrent behavior. Maybe here is a good place to start making that happen.

regards, tom lane