Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting PHP/MySQL thread
Nolan, And my pet peeve of the month is software source distributions that include the documentation ONLY in HTML, which is OK IF you have Apache running on the system you're building the sources on and are willing to make the documentation directory available to Apache, but otherwise they're very hard to use. ??? You can look at an HTML file directly with any browser. If you're SSH-ing in to a remote system, use Lynx. Though I agree that providing both man and html would be nicer. And while I'm on the subject, the only book (hard copy) I've got on PostgreSQL is the O'Reilly 'Practical PostgreSQL' book, now a bit dated, which has one of the worst indexes I've seen in a computer manual in years. It may be the worst index I've ever experienced in an O'Reilly book. O'Reilly seems to be pretty hit-and-miss on this account. The Perl books are well-indexed, but SQL in a Nutshell has *no* index, perhaps because O'Reilly thought (wrongly) that it didn't need one because of the dictionary-like format. The O'Reilly label is not a guarantee of quality, just a general indicator. I know that indexes are the last thing authors want to do (both literally and figuratively), but a good index makes the rest of the book much better. Authors seldom do the indexes themselves, as indexing is a black art known to few (and I have yet to see a really good index prepared by the author -- sorry, Bruce). Most frequently, the publisher hires a professional indexer and takes the cost out of the author's advance. When you find a really good index, you know that either: a) the author really cares about indexes; or b) the publisher offered to pay for or split the cost of indexing, or at least made it a requirement of the book contract. Obviously, the publisher can really influence things through (b), so if I find a badly indexed book (and in my estimate 70% of tech books are badly indexed) I blame the publisher first. 
-- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
RE : [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread
Hello, And, to avoid the connotation of bias, whoever writes such a migration tutorial might want to suggest using the PEAR::DB abstraction layer to avoid migration hassles in the future. http://pear.php.net/ I don't like PEAR::DB very much, since its error messages have a HUGE lack of accuracy... I've lost time chasing an Unknown error displayed by PEAR::DB which was in fact a Permission Denied from PostgreSQL... :-/ I've already told them about this problem, but they didn't seem to care. I'm waiting for PHP5 (which should have a better object model, with the ability to throw exceptions) and a newer PEAR::DB that uses the PHP5 features. So I will keep using the pg_ functions for several years yet before taking a new look at it! :-) Cheers, --- Bruno BAGUETTE - [EMAIL PROTECTED] ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] different datatypes in index scan join
On Monday 23 Jun 2003 10:06 pm, Jonathan Bartlett wrote: Is there a link to some of these discussions? Go to http://archives.postgresql.org/ and search for coercion or implicit coercion or implicit cast or similar on the hackers list. -- Richard Huxton ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Eliminating start error message: unary operator
On Mon, 23 Jun 2003 15:16:28 -0400 Carlos [EMAIL PROTECTED] wrote: start: Executing /etc/rc.d/init.d/postgresql start .. Starting postgresql service: -sh: [: ==: unary operator expected [ OK ] do: ls -l /bin/sh it is supposed to be a symlink pointing to /bin/bash if it's not, check that you do have /bin/bash and then either: 1. change the symlink /bin/sh to point to /bin/bash, or 2. change the startup script to use bash instead. that should help. depesz -- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting PHP/MySQL thread
??? You can look at an HTML file directly with any browser. If you're SSH-ing in to a remote system, use Lynx. Though I agree that providing both man and html would be nicer. Try accessing an HTML file on a Linux system from a PC-based browser. Unless you have some kind of file sharing software running, which I generally don't because the only times I've ever been hacked into they got in through file sharing ports, you can't get there from here. O'Reilly seems to be pretty hit-and-miss on this account. The Perl books are well-indexed, but SQL in a Nutshell has *no* index, perhaps because O'Reilly thought (wrongly) that it didn't need one because of the dictionary-like format. The O'Reilly label is not a guarantee of quality, just a general indicator. I think the 'Nutshell' books are a different breed of cat; none of them have ever had indexes worth mentioning. Authors seldom do the indexes themselves, as indexing is a black art known to few (and I have yet to see a really good index prepared by the author -- I've been somewhat involved in three book projects (two textbooks and one rule book), and in all three cases the authors did their own index. Maybe I've just had a good run of luck on the O'Reilly books I've bought, or maybe I haven't bought as many of them in the last three or four years as I used to. -- Mike Nolan ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread
Dennis Gearon wrote: so, if Postgres were to have a manual like PHP's OLD manual (more next), that would be a worthwhile contribution? the new manuals seem to be drifting to using only GOOGLE listings. MUCH less information on one page, not nearly as good search results as the old one. I don't know why they are switching. If google is going to do web searches for technical sites, it needs to change the format. I think they are having performance problems and they are using google to shift the load... ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Many Pl/PgSQL parameters - AllocSetAlloc(128)?
(cross-posting to HACKERS) Reuven M. Lerner wrote: I'm creating a new OpenACS package that uses PostgreSQL, and in doing so have encountered what seems to be a problem in PostgreSQL. [...snip...] CREATE OR REPLACE FUNCTION add_news__test (integer,varchar,timestamptz,varchar,varchar,varchar, varchar,integer,timestamptz,integer,timestamptz,varchar,varchar, varchar,integer,boolean, varchar, varchar, varchar, timestamptz, integer, varchar, integer, integer) [...snip...] p_last_mod_date alias for $20; -- default null/timestamptz p_modified_by alias for $21; -- default null/integer p_last_mod_date alias for $20; -- default null/timestamptz p_modified_by alias for $21; -- default null/integer ^ above two lines repeated p_image_filename alias for $22; -- default null/text p_headline_page alias for $23; -- default null/integer more importantly, you call the function (below) with a varchar here, not integer p_headline_position alias for $24; -- default null/integer [...snip...] now(), -- p_last_mod_date ^^^ try 'now'::timestamptz '298'::integer,-- p_modified_by 'image.jpeg'::varchar, -- p_image_filename 'Category page'::varchar, -- p_headline_page ^^^ this one should be an integer '1'::integer -- p_headline_position ); You found a real bug, I can confirm it on CVS tip. However your workaround is to call the function *exactly* as declared. Otherwise in parse_func.c:gen_cross_product() the following code is executed:

    nanswers = 1;
    for (i = 0; i < nargs; i++)
    {
        nanswers *= (arginh[i].nsupers + 2);
        cur[i] = 0;
    }
    iter = result = (Oid **) palloc(sizeof(Oid *) * nanswers);

I get nanswers = 16777216, so right off the bat 67MB or so is allocated. Then there's this:

    /* compute the cross product from right to left */
    for (;;)
    {
        oneres = (Oid *) palloc0(FUNC_MAX_ARGS * sizeof(Oid));

I'm guessing this gets executed nanswers times. I saw memory usage grow to 880 MB and then killed the process. 
I'm not sure of the best way to fix this yet, but I found that when calling the function with argument types matching the prototype perfectly, this code never gets executed. HTH, Joe p.s. here's a backtrace: #0 AllocSetAlloc (context=0x830a624, size=128) at aset.c:731 #1 0x081bcb14 in MemoryContextAllocZero (context=0x830a624, size=128) at mcxt.c:505 #2 0x080c5c03 in gen_cross_product (arginh=0xbfffd120, nargs=24) at parse_func.c:1094 #3 0x080c59b6 in argtype_inherit (nargs=24, argtypes=0xbfffd350) at parse_func.c:975 #4 0x080c5836 in func_get_detail (funcname=0x831451c, fargs=0x83178e8, nargs=24, argtypes=0xbfffd350, funcid=0xbfffd33c, rettype=0xbfffd340, retset=0xbfffd347 \bÁ\002, true_typeids=0xbfffd348) at parse_func.c:891 #5 0x080c4c4c in ParseFuncOrColumn (pstate=0x8317810, funcname=0x831451c, fargs=0x83178e8, agg_star=0 '\0', agg_distinct=0 '\0', is_column=0 '\0') at parse_func.c:241 #6 0x080c41de in transformExpr (pstate=0x8317810, expr=0x8317714) at parse_expr.c:399 #7 0x080cb4ed in transformTargetEntry (pstate=0x8317810, node=0x8317714, expr=0x0, colname=0x0, resjunk=0 '\0') at parse_target.c:60 #8 0x080cb53b in transformTargetList (pstate=0x8317810, targetlist=0x831774c) at parse_target.c:193 #9 0x080b61c8 in transformSelectStmt (pstate=0x8317810, stmt=0x8317768) at analyze.c:1771 #10 0x080b41b7 in transformStmt (pstate=0x8317810, parseTree=0x8317768, extras_before=0xbfffd574, extras_after=0xbfffd578) at analyze.c:407 #11 0x080b402b in do_parse_analyze (parseTree=0x8317768, pstate=0x8317810) at analyze.c:234 #12 0x080b3f44 in parse_analyze (parseTree=0x8317768, paramTypes=0x830a624, numParams=137405988) at analyze.c:159 #13 0x08159c3c in pg_analyze_and_rewrite (parsetree=0x8317768, paramTypes=0x0, numParams=0) at postgres.c:482 #14 0x08159f83 in exec_simple_query ( query_string=0x8313c40 select add_news__test(\n 1000::integer,, ' ' repeats 15 times, \n'en_US'::varchar,, ' ' repeats 15 times, \n'2003-6-23'::timestamptz, \n'text text text'::varchar, 
\n'language'::varchar, ...) at postgres.c:795 #15 0x0815bd1b in PostgresMain (argc=4, argv=0x829aa9c, username=0x829aa64 postgres) at postgres.c:2753 #16 0x0813a531 in BackendFork (port=0x82a80c0) at postmaster.c:2471 #17 0x0813a026 in BackendStartup (port=0x82a80c0) at postmaster.c:2118 #18 0x08138b5f in ServerLoop () at postmaster.c:1090 #19 0x081384dd in PostmasterMain (argc=5, argv=0x829a4c8) at postmaster.c:872 #20 0x0810f713 in main (argc=5, argv=0xbfffe334) at main.c:211 #21 0x420156a4 in __libc_start_main () from /lib/tls/libc.so.6 ---(end of
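The numbers in Joe's analysis check out: with 24 arguments and no inheritance involved, each argument still contributes (nsupers + 2) = 2 candidate types, so gen_cross_product() enumerates 2^24 combinations. Here is a back-of-the-envelope sketch of the blow-up; FUNC_MAX_ARGS = 32 and a 4-byte Oid are assumptions about a 32-bit build of that era, not figures from the report.

```python
# Mirror the nanswers computation from parse_func.c:gen_cross_product().
# Constants below are assumptions (32-bit build, FUNC_MAX_ARGS = 32).
FUNC_MAX_ARGS = 32
OID_BYTES = 4

def cross_product_size(nargs, nsupers_per_arg=0):
    """Each argument multiplies the candidate count by (nsupers + 2)."""
    nanswers = 1
    for _ in range(nargs):
        nanswers *= nsupers_per_arg + 2
    return nanswers

nanswers = cross_product_size(24)
print(nanswers)                                   # 16777216 candidate signatures
# The up-front palloc of the pointer array ("67MB or so" in the report):
print(nanswers * OID_BYTES)                       # 67108864 bytes
# One palloc0(FUNC_MAX_ARGS * sizeof(Oid)) per candidate, if never freed:
print(nanswers * FUNC_MAX_ARGS * OID_BYTES // 2**20)  # 2048 MB
```

This is why calling the function with argument types matching the prototype exactly avoids the problem: func_get_detail() finds an exact match and the cross-product path is never entered.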
Re: [GENERAL] Many Pl/PgSQL parameters - AllocSetAlloc(128)?
Excellent -- thanks so much for your help. I just tried the function with the right arguments, and it worked just fine. Yet more proof of named parameters being a good thing... Reuven ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Many Pl/PgSQL parameters - AllocSetAlloc(128)?
Joe Conway wrote: I get nanswers = 16777216, so right off the bat 67MB or so is allocated. Then there's this: snippet /* compute the cross product from right to left */ for (;;) { oneres = (Oid *) palloc0(FUNC_MAX_ARGS * sizeof(Oid)); /snippet I'm guessing this gets executed nanswers times. I saw memory usage grow to 880 MB and then killed the process. I'm not sure of the best way to fix this yet, but I found that when calling the function with argument types matching the prototype perfectly, this code never gets executed. Actually, adding a pfree(oneres); to the end of that for loop plugs the memory leak and allows me to see the error message: ERROR: Function add_news__test(integer, character varying, timestamp with time zone, character varying, character varying, character varying, character varying, integer, timestamp with time zone, integer, timestamp with time zone, character varying, character varying, character varying, integer, boolean, character varying, character varying, character varying, timestamp with time zone, integer, character varying, character varying, integer) does not exist Unable to identify a function that satisfies the given argument types You may need to add explicit typecasts Takes a while to check all 16777216 possibilities though, so I'm still not sure more isn't needed here. Joe ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Selecting dupes from table
Hello , I have table directory with 3 columns (id,url,title) I want to list all entries with duplicate urls. I tried this: select id,url,title from directory where url IN (select url from directory group by url having count(url) > 1) ORDER by url; but this takes 30 seconds with 25.000 entries. I have an index on url. Can I use any other query to select this faster? -- Best regards, Uros mailto:[EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Selecting dupes from table
Uros wrote: I want to list all entries with duplicate urls. I tried this: select id,url,title from directory where url IN (select url from directory group by url having count(url) > 1) ORDER by url; Try: select id,url,title from directory group by id, url, title having count(url) > 1 order by url; Think it should work, Jan-Christian Imbeault ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Selecting dupes from table
On Tue, Jun 24, 2003 at 12:16:43PM +0200, Uros wrote: Hello , I have table directory with 3 columns (id,url,title) I want to list all entries with duplicate urls. I tried this: select id,url,title from directory where url IN (select url from directory group by url having count(url) > 1) ORDER by url; but this takes 30 seconds with 25.000 entries. I have index on url. Can I use any other query to select this faster. How about: Duplicate urls would be given by: select url from directory group by url having count(*) > 1; To get all the entries with those urls, something like: select id,url,title from directory, (select url from directory group by url having count(*) > 1) as list where list.url = directory.url; I hope I got the syntax right. -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ the West won the world not by the superiority of its ideas or values or religion but rather by its superiority in applying organized violence. Westerners often forget this fact, non-Westerners never do. - Samuel P. Huntington
Re: [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread
Josh Berkus wrote: Matt, The quality of the material is very good, so please don't get me wrong, I just think it's hard to find stuff. Both PHP and MySQL have well laid out docs, with PHP being the better of the two. I certainly agree ... one of my goals (shared with some other people) is to eventually migrate all of the *accessory* documentation (techdocs, etc.) to a searchable system that's easy for non-programmers to contribute to and edit (i.e. SGML and CVS not required). Yep. The present Techdocs site is kind of unmaintained, and the Plone area isn't being worked on either presently (lack of time). Finally got Bricolage installed on a system here at work to play around with. Reckon Josh'll be interested in that... :-) Regards and best wishes, Justin Clift Item #87 on Josh's ToDo list ... ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Selecting dupes from table
Hello Martijn, Tuesday, June 24, 2003, 12:32:53 PM, you wrote: MvO On Tue, Jun 24, 2003 at 12:16:43PM +0200, Uros wrote: MvO How about: MvO Duplicate urls would be given by: MvO select url from directory group by url having count(*) > 1; MvO To get all the entries with those urls, something like: MvO select id,url,title from directory, MvO (select url from directory group by url having count(*) > 1) as list MvO where list.url = directory.url; MvO I hope I got the syntax right. I tried that before but got error: ERROR: Column reference url is ambiguous -- Best regards, Uros mailto:[EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Selecting dupes from table
On Tue, Jun 24, 2003 at 01:12:05PM +0200, Uros wrote: Hello Martijn, MvO select id,url,title from directory, MvO (select url from directory group by url having count(*) > 1) as list MvO where list.url = directory.url; MvO I hope I got the syntax right. I tried that before but got error: ERROR: Column reference url is ambiguous Oh right, try: select id,directory.url,title from directory, (select url from directory group by url having count(*) > 1) as list where list.url = directory.url; -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ the West won the world not by the superiority of its ideas or values or religion but rather by its superiority in applying organized violence. Westerners often forget this fact, non-Westerners never do. - Samuel P. Huntington
Re: [GENERAL] Selecting dupes from table
Hi, I think this could be good : select d1.id, d1.url, d1.title from directory d1 where exists (select url from directory d2 where d1.url=d2.url and d2.oid <> d1.oid) order by d1.url; Hope this will help, Regards, Le Mardi 24 Juin 2003 12:16, Uros a écrit : Hello , I have table directory with 3 columns (id,url,title) I want to list all entries with duplicate urls. I tried this: select id,url,title from directory where url IN (select url from directory group by url having count(url) > 1) ORDER by url; but this takes 30 seconds with 25.000 entries. I have index on url. Can I use any other query to select this faster. -- Hervé Piedvache Elma Ingénierie Informatique 6 rue du Faubourg Saint-Honoré F-75008 - Paris - France Pho. 33-144949901 Fax. 33-144949902 ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Selecting dupes from table
Hello Hervé, Thanks a lot for the help. Both Martijn van Oosterhout's solution and yours do the job well. Execution time is now about 1 to 2 seconds. I also figured out myself what I was doing wrong with the join and the error I got. If anybody needs this, here are all three solutions. select distinct d1.id,d1.url,d1.title from directory d1 inner join directory d2 on d2.url = d1.url where d1.id <> d2.id ORDER by d1.url; select id,directory.url,title from directory, (select url from directory group by url having count(*) > 1) as list where list.url = directory.url; select d1.id, d1.url, d1.title from directory d1 where exists (select url from directory d2 where d1.url=d2.url and d2.oid <> d1.oid) order by d1.url; -- Best regards, Uros mailto:[EMAIL PROTECTED] Tuesday, June 24, 2003, 1:27:44 PM, you wrote: HP Hi, HP I think this could be good : HP select d1.id, d1.url, d1.title HP from directory d1 HP where exists (select url from directory d2 where d1.url=d2.url and d2.oid <> d1.oid) HP order by d1.url; HP Hope this will help, HP Regards, HP Le Mardi 24 Juin 2003 12:16, Uros a écrit : Hello , I have table directory with 3 columns (id,url,title) I want to list all entries with duplicate urls. I tried this: select id,url,title from directory where url IN (select url from directory group by url having count(url) > 1) ORDER by url; but this takes 30 seconds with 25.000 entries. I have index on url. Can I use any other query to select this faster. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
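For readers following the thread outside SQL, the GROUP BY url ... HAVING count(*) > 1 idea behind these solutions can be sketched in a few lines of Python; the sample rows and table layout below are invented for illustration only.

```python
# Sketch of the duplicate-url logic from the queries in this thread:
# group rows by url, then keep every row whose group has more than
# one member. Sample data is made up for illustration.
from collections import defaultdict

rows = [  # (id, url, title)
    (1, "http://a.example", "A"),
    (2, "http://b.example", "B"),
    (3, "http://a.example", "A again"),
]

def duplicate_entries(rows):
    by_url = defaultdict(list)
    for row in rows:
        by_url[row[1]].append(row)       # GROUP BY url
    dupes = [r
             for grp in by_url.values()
             if len(grp) > 1             # HAVING count(*) > 1
             for r in grp]
    return sorted(dupes, key=lambda r: r[1])  # ORDER BY url

print(duplicate_entries(rows))  # only ids 1 and 3 share a url
```

The hash-aggregate shape of this sketch is also why the GROUP BY formulations scale better than the correlated IN subquery the original poster started with.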
Re: [GENERAL] tsearch: immutable functions?
Read thread: http://fts.postgresql.org/db/msg.html?mid=1359513 Jochem van Dieten wrote: I was wondering if there is any reason not to define the functions from tsearch as immutable. What I am trying to achieve is that I don't have to create a separate field to index as is explained in the manual, but just create a functional gist index. Instead of: alter table titles add titleidx txtidx; update titles set titleidx=txt2txtidx(title); create index t_idx on titles using gist(titleidx); just: create index t_idx on titles using gist(txt2txtidx(title)); But creating a functional index requires an immutable function. Jochem ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match -- Teodor Sigaev E-mail: [EMAIL PROTECTED] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Eliminating start error message: unary operator
Hi Tom, This log has the error but I do not understand why it occurs. Attached is the file also + PGVERSION=7.3 + INITD=/etc/rc.d/init.d + . /etc/rc.d/init.d/functions ++ TEXTDOMAIN=initscripts ++ TEXTDOMAINDIR=/etc/locale ++ umask 022 ++ export PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin ++ '[' -z '' ']' ++ COLUMNS=80 ++ '[' -f /etc/sysconfig/i18n -a -z '' ']' ++ . /etc/sysconfig/i18n +++ LANG=en_US.iso885915 +++ SUPPORTED=en_US.iso885915:en_US:en +++ SYSFONT=lat0-sun16 +++ SYSFONTACM=iso15 +++ /sbin/consoletype ++ '[' en_US.iso885915 = ja_JP.eucJP -a pty '!=' pty ']' +++ /sbin/consoletype ++ '[' en_US.iso885915 = ko_KR.eucKR -a pty '!=' pty ']' +++ /sbin/consoletype ++ '[' en_US.iso885915 = zh_CN.GB2312 -a pty '!=' pty ']' +++ /sbin/consoletype ++ '[' en_US.iso885915 = zh_TW.Big5 -a pty '!=' pty ']' ++ export LANG ++ '[' -z '' ']' ++ '[' -f /etc/sysconfig/init ']' ++ . /etc/sysconfig/init +++ BOOTUP=color +++ RES_COL=60 +++ MOVE_TO_COL=echo -en \033[60G +++ SETCOLOR_SUCCESS=echo -en \033[1;32m +++ SETCOLOR_FAILURE=echo -en \033[1;31m +++ SETCOLOR_WARNING=echo -en \033[1;33m +++ SETCOLOR_NORMAL=echo -en \033[0;39m +++ LOGLEVEL=3 +++ PROMPT=yes ++ '[' -x /sbin/consoletype ']' +++ consoletype ++ '[' pty = serial ']' ++ '[' color '!=' verbose ']' ++ INITLOG_ARGS=-q ++ typeset -F + TYPESET=declare -f action declare -f checkpid declare -f confirm declare -f daemon declare -f echo_failure declare -f echo_passed declare -f echo_success declare -f echo_warning declare -f failure declare -f killproc declare -f passed declare -f pidfileofproc declare -f pidofproc declare -f status declare -f strstr declare -f success declare -f warning + . 
/etc/sysconfig/network ++ NETWORKING=yes ++ HOSTNAME=buyemr.pbsinet.com ++ GATEWAY=209.4.117.159 ++ basename /etc/rc.d/init.d/postgresql + NAME=postgresql + PGPORT=5432 + export PGDATA=/var/lib/pgsql + PGDATA=/var/lib/pgsql + '[' -f /var/lib/pgsql/PG_VERSION ']' + export PGDATA=/var/lib/pgsql/data + PGDATA=/var/lib/pgsql/data + '[' -f /etc/sysconfig/pgsql/postgresql ']' + export PGDATA + export PGPORT + export PGOPTS + '[' yes = no ']' + '[' -f /usr/bin/postmaster ']' + start + PSQL_START=Starting postgresql service: + '[' -f /var/lib/pgsql/data/PG_VERSION ']' + '[' -d /var/lib/pgsql/data/base ']' ++ cat /var/lib/pgsql/data/PG_VERSION + '[' 7.3 '!=' 7.3 ']' ++ pidof -s /usr/bin/postmaster + pid= + '[' ']' + rm -f /tmp/.s.PGSQL.5432 + echo -n 'Starting postgresql service: ' Starting postgresql service: + su -l postgres -s /bin/sh -c '/usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster -o '\''-p 5432'\'' start /dev/null 21' -sh: [: ==: unary operator expected + sleep 1 ++ pidof -s /usr/bin/postmaster + pid=6350 + '[' 6350 ']' + echo 'declare -f action declare -f checkpid declare -f confirm declare -f daemon declare -f echo_failure declare -f echo_passed declare -f echo_success declare -f echo_warning declare -f failure declare -f killproc declare -f passed declare -f pidfileofproc declare -f pidofproc declare -f status declare -f strstr declare -f success declare -f warning' + grep 'declare -f success' + success 'Starting postgresql service: ' + '[' -z '' ']' + initlog -q -n /etc/rc.d/init.d/postgresql -s 'Starting postgresql service: ' -e 1 + '[' color '!=' verbose -a -z '' ']' + echo_success + '[' color = color ']' + echo -en '\033[60G' [60G+ echo -n '[ ' [ + '[' color = color ']' + echo -en '\033[1;32m' [1;32m+ echo -n OK OK+ '[' color = color ']' + echo -en '\033[0;39m' [0;39m+ echo -n ' ]' ]+ echo -ne '\r' + return 0 + return 0 + touch /var/lock/subsys/postgresql + echo 6350 + echo + exit 0 -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL 
PROTECTED] On Behalf Of Tom Lane Sent: Monday, June 23, 2003 5:18 PM To: Carlos Cc: [EMAIL PROTECTED]; Dain Subject: Re: [GENERAL] Eliminating start error message: unary operator Carlos Oliva [EMAIL PROTECTED] writes: I got this in standard out. I also included the output in a text file I don't see the complaint anywhere in there, though? Also, it looks like this failed because the postmaster was already running. You probably need to stop the postmaster and then try the sh -x. Don't forget to pipe both stdout and stderr into the same file, eg sh -x /etc/rc.d/init.d/postgresql start >mylog 2>&1 regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] bytea char escaping
Hi, Which characters must be escaped in a bytea string, and how exactly? LF - \\n CR - \\r ? ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
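For what it's worth, the rules (per the bytea documentation of this era; double-check against your server version) are stricter than just LF and CR: the single quote, the backslash, and every non-printable byte need escaping, the latter as doubled-backslash octal escapes rather than \\n / \\r. A hedged Python sketch of building such a literal:

```python
# Hedged sketch of PostgreSQL's "escape" bytea input format, for use
# inside a standard single-quoted SQL literal: the string parser halves
# backslashes once, then the bytea parser halves them again, so a
# single backslash byte must be written as four backslashes, and
# non-printable bytes as \\ooo octal escapes. Based on the bytea docs;
# verify against your server version before relying on it.

def escape_bytea_literal(data: bytes) -> str:
    out = []
    for b in data:
        if b == 0x27:                 # single quote -> doubled quote
            out.append("''")
        elif b == 0x5C:               # backslash -> four backslashes
            out.append("\\\\\\\\")
        elif 0x20 <= b <= 0x7E:       # printable ASCII passes through
            out.append(chr(b))
        else:                         # LF, CR, and other non-printables
            out.append("\\\\%03o" % b)
    return "".join(out)

# LF and CR come out as octal escapes, not \n / \r:
print(escape_bytea_literal(b"a\nb\r"))   # a\\012b\\015
```

In practice a driver-side quoting function (PQescapeBytea in libpq, pg_escape_bytea in PHP 4.2+) is the safer route than hand-rolling this.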
Re: [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread
I'm a Postgres and PHP newbie. I'm having a great deal of success with my latest development effort, having moved most of the logic from a perl/php logic 'core' to postgres using plpgsql functions. (Thanks for all that help, Josh). I have a few comments to make on the idea of introducing people, PHP developers especially, to postgresql. I'm not commenting here on how easy it is to use PHP with postgres (it was transparent for me using Debian) or whether or not to advocate the use of advanced features to general users. Rather, it appears to me that the PHP/Postgres documentation and feature set should be improved. 1) PHP Documentation The postgresql write-up in the PHP html documentation doesn't give a very good picture of the capabilities of postgres. While the PHP docs aren't obviously a good place to write up the benefits of plpgsql functions, some mention should be made to help differentiate between the capabilities of MySQL and Postgres. PHP documents: ref.pgsql.html; ref.mysql.html The MySQL examples given for database-specific functions are useful and to the point. The pages on most of the Postgres functions are sketchy. (No error number in Postgres...) PHP documents: function.mysql-errno.html; function.pg-result-error.html PHP/Postgres provides a set of predefined constants, eg PGSQL_COMMAND_OK and PGSQL_FATAL_ERROR. The use and parameters of these constants is not described. The latter appears to provide inconsistent results under my PHP 4.2.3 install. 2) PHP-Postgres bugs Apart from the PGSQL_FATAL_ERROR problem above, it would be good to find a simpler, more PHP-like approach to catching exceptions and the like. At the moment I believe one has to do something like:

function test () {
    $sql = "SELECT count(n_id) AS number FROM people";
    ob_start();
    $result = pg_exec($this->conn, $sql);
    $this->status = pg_result_status($result);
    ob_end_clean();
    $this->result_checker();
    if ($this->error != 0) {
        echo "An error occurred.\n";
        exit;
    }
    ...
    return $this;
}

function result_checker () {
    // horrible code to check for postgres exceptions
    // status numbers sometimes show up
    // ghosts of PGSQL_FATAL_ERROR?
    if (! isset($this->status) or ($this->status == 5 or $this->status == 7)) {
        $this->error = 1;
        // weirdly, this always works
        $this->error_msg = pg_last_error($this->conn);
        return 1;
    } else {
        return 0;
    }
}

On 22/06/03, Bruce Momjian ([EMAIL PROTECTED]) wrote: We need to use this opportunity to encourage PHP folks to switch to PostgreSQL. -- Rory Campbell-Lange [EMAIL PROTECTED] www.campbell-lange.net ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting
[EMAIL PROTECTED] wrote: ??? You can look at an HTML file directly with any browser. If you're SSH-ing in to a remote system, use Lynx. Though I agree that providing both man and html would be nicer. Try accessing an HTML file on a Linux system from a PC-based browser. Unless you have some kind of file sharing software running, which I generally don't because the only times I've ever been hacked into they got in through file sharing ports, you can't get there from here. If you work on Unix systems remotely on a regular basis, you should have a Unix system as a workstation too. That way you can use ssh(1) to forward your X11 connections through a secure channel. A second PC can be implemented as a memory+disk upgrade together with a VMware license. Jan -- #==# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #== [EMAIL PROTECTED] # ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting
Jan Wieck wrote: If you work on Unix systems remotely on a regular basis, you should have a Unix system as a workstation too. That way you can use ssh(1) to forward your X11 connections through a secure channel. A second PC can be implemented as a memory+disk upgrade together with a VMware license. There are also ssh clients which support X11 forwarding on a windows machine, and since there are X11 servers for windows... you don't necessarily need a unix workstation. Apart from that, a (tight)vnc server might be less bandwidth-consuming. Arjen ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] [pgsql-advocacy] interesting PHP/MySQL thread
[EMAIL PROTECTED] writes: And while i'm on the subject, the only book (hard copy) I've got on PostgreSQL is the O'Reilly 'Practical PostgreSQL' book, now a bit dated, which has one of the worst indexes I've seen in a computer manual in years. It may be the worst index I've ever experienced in an O'Reilly book. I've had a series of paper clips, bulldog clips and post-it notes marking the sections I tend to reuse frequently, because the index doesn't get you there. Most of the time I use the online manual, and I've got a few pages that aren't obvious from the table of contents bookmarked for the online manual, too. The online manuals have an index. Could you write up a list of proposed index additions for us? A few quick indexentry commands would be easy enough to add to the doc sources --- the hard part is knowing what to index. regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Message at the console
I have created a table using the following command CREATE TABLE metricsreporttable(id VARCHAR(150) NOT NULL,sitename VARCHAR(155),title VARCHAR(160),type VARCHAR(165),count INTEGER,categoryid VARCHAR(170)); When I start inserting rows into this table I get the following error message in the console. "Attribute metricsreporttable.oid must be GROUPed or used in an aggregate function" What might be the problem? Thanks and Regards, Kallol.
Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting
Arjen van der Meijden wrote: Jan Wieck wrote: If you work on Unix systems remotely on a regular basis, you should have a Unix system as a workstation too. That way you can use ssh(1) to forward your X11 connections through a secure channel. A second PC can be implemented as a memory+disk upgrade together with a VMware license. There are also ssh clients which support X11 forwarding on a Windows machine, and since there are X11 servers for Windows... you don't necessarily need a Unix workstation. Apart from that, a (tight)vnc server might be less bandwidth-consuming. There are all kinds of stuff that works: VPNs, VNCs, you name it. I just have the best experience with having a Unix workstation when administering/working on remote Unix systems. Plus, banishing your workstation(s) into virtual machines has another, not so obvious advantage: a backup of the workstation not only gets reduced to copying the files that make up the virtual disk ... you can restore it onto different hardware without confusing the device manager or going through config hassles. Ever restored a Windows backup onto a replacement notebook? Don't risk that fun. Right now I have 1 Linux and 2 Win2K systems running inside of VMware on my notebook, with FreeBSD and Minix standing by. They are a happy little virtual network. But I think we're going a bit off topic here ... Jan -- #==# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #== [EMAIL PROTECTED] # ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Eliminating start error message: unary operator
Carlos Oliva [EMAIL PROTECTED] writes: This log has the error but I do not understand why it occurs. Starting postgresql service: + su -l postgres -s /bin/sh -c '/usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster -o '\''-p 5432'\'' start >/dev/null 2>&1' -sh: [: ==: unary operator expected Hm. Apparently the error is actually occurring inside the pg_ctl script. Try changing the above line in the initscript to do /bin/sh -c '/bin/sh -x /usr/bin/pg_ctl ...' so we can get tracing of the pg_ctl script too. (I am now thinking that the error probably occurs because pg_ctl is expecting some environment variable to be set that is not set when run from the boot script? If so, you might not see the failure if you try to run pg_ctl by hand with sh -x. But you could try that first if you like.) regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Fwd: Re: [GENERAL] Lotus Domino and PostgreSql in Linux
I'm working on pgSQL integration with Domino 6 (DECS, LCLSX) (there will be documentation in the coming weeks). I have a couple of questions for you: 1) Are you using Domino 6 or 5.x? 2) In your DSN setup (on NT?) do you have the valid account information to connect to Pg? 3) If you are running an agent, are you sure you have the security structure that will permit that? For instance, in Domino 6, DECS (unlike DCRs) requires admin privilege to set up (which is probably not the same as the designer privs). In addition, there is an option you set for the database to allow external connection. Once that is done you have to make sure any agents running against this database have the appropriate privs as well. Hope this helps a bit. I'm on the same journey too :) Quoting Kallol Nandi [EMAIL PROTECTED]: I am running an agent in the Domino server that connects to a database in PostgreSQL through an ODBC DSN. Both are installed in the same Linux box. I am getting an error "Error Creating product object" at the line Set con = New ODBCConnection. Here is the code:

Option Public
Uselsx "*LSXODBC"

Sub Initialize
    Dim con As ODBCConnection
    Dim qry As ODBCQuery
    Dim result As ODBCResultSet
    Dim id As Integer
    Dim nam As String, job As String
    ' Am getting the error here
    Set con = New ODBCConnection
    Set qry = New ODBCQuery
    Set result = New ODBCResultSet
    Set qry.Connection = con
    Set result.Query = qry
    status = con.ConnectTo("debug")
    qry.SQL = "select * from testtable"
    result.Execute
    Do
        result.NextRow
        id = result.GetValue("a", id)
        nam = result.GetValue("b", nam)
    Loop Until result.IsEndOfData
    result.Close(DB_CLOSE)
    con.Disconnect
End Sub

I guess it is an error related to Domino, but I'm not sure. It may be related to the ODBC driver also. Is there any way to solve it? Regards, Kallol. This email account is being host by: VCSN, Inc : http://vcsn.com - End forwarded message - -- Keith C. Perry Director of Networks Applications VCSN, Inc. 
http://vcsn.com This email account is being host by: VCSN, Inc : http://vcsn.com ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Error with word 'desc'
Hi, It seems that every time I use the word 'desc' I get an error. For example, this query: SELECT code, desc FROM or_code_table WHERE tr = 'FORMAT' gave me: ERROR: parser: parse error at or near "desc" When I created the table I got the same error too. I got around that by creating the table using pgaccess instead of using the psql console. Is 'desc' a reserved keyword of some kind? Even if yes (like in MySQL or Oracle), the parser should understand the context in which it's mentioned. I don't remember getting that kind of error with MySQL / Oracle. Here is the table:

lightcone=# \d or_code_table
          Table "or_code_table"
  Column  |           Type           |   Modifiers
----------+--------------------------+---------------
 code     | character varying(2)     | not null
 tr       | character varying(10)    | not null
 desc     | character varying(100)   |
 mod_user | character varying(15)    |
 mod_time | timestamp with time zone | default now()
Primary key: or_code_table_pkey

Any help is greatly appreciated. Thanks. RDB -- Reuben D. Budiardja Department of Physics and Astronomy The University of Tennessee, Knoxville, TN - /\ ASCII Ribbon Campaign against HTML \ / email and proprietary format X attachments. / \ - Have you been used by Microsoft today? Choose your life. Choose freedom. Choose LINUX. - ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Error with word 'desc'
On Tue, 24 Jun 2003, Reuben D. Budiardja wrote: Hi, It seems that every time I use the word 'desc' I get an error. For example, this query: SELECT code, desc FROM or_code_table WHERE tr = 'FORMAT' gave me ERROR: parser: parse error at or near "desc" When I created the table I got the same error too. I got around that by creating the table using pgaccess instead of using the psql console. Is 'desc' a reserved keyword of some kind? Even if yes (like in MySQL or Oracle), the parser should understand the context in which it's mentioned. I don't remember getting that kind of error with MySQL / Oracle. Desc is a reserved word. You'll need to double-quote it ("desc") in the query for it to be legal. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
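A minimal illustration, using the table and column names from this thread:

```sql
-- DESC is a reserved word (the ORDER BY modifier), so as a column name
-- it must be written as a quoted identifier:
SELECT code, "desc"
FROM or_code_table
WHERE tr = 'FORMAT';

-- The same quoting is needed in DDL:
-- CREATE TABLE or_code_table (..., "desc" varchar(100), ...);
```

Note that once an identifier is created quoted, it must be quoted the same way everywhere it is referenced.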
Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting
The X server in Cygwin works just fine on all the systems I have tested. I have several Linux boxen at home, all headless, and I use Cygwin and XDMCP to select which box I want to connect to and manage; it seems as fast as using a local screen etc. Webmin is also a good tool, as it has a PostgreSQL management module in it. - Original Message - From: Jan Wieck [EMAIL PROTECTED] To: Arjen van der Meijden [EMAIL PROTECTED] Cc: [EMAIL PROTECTED]; 'Advocacy PostgreSQL' [EMAIL PROTECTED]; 'PostgreSQL-general' [EMAIL PROTECTED] Sent: Tuesday, June 24, 2003 3:22 PM Subject: Re: [GENERAL] [pgsql-advocacy] Documentation quality WAS: interesting Arjen van der Meijden wrote: Jan Wieck wrote: If you work on Unix systems remotely on a regular basis, you should have a Unix system as a workstation too. That way you can use ssh(1) to forward your X11 connections through a secure channel. A second PC can be implemented as a memory+disk upgrade together with a VMware license. There are also ssh clients which support X11 forwarding on a Windows machine, and since there are X11 servers for Windows... you don't necessarily need a Unix workstation. Apart from that, a (tight)vnc server might be less bandwidth-consuming. There are all kinds of stuff that works: VPNs, VNCs, you name it. I just have the best experience with having a Unix workstation when administering/working on remote Unix systems. Plus, banishing your workstation(s) into virtual machines has another, not so obvious advantage: a backup of the workstation not only gets reduced to copying the files that make up the virtual disk ... you can restore it onto different hardware without confusing the device manager or going through config hassles. Ever restored a Windows backup onto a replacement notebook? Don't risk that fun. Right now I have 1 Linux and 2 Win2K systems running inside of VMware on my notebook, with FreeBSD and Minix standing by. They are a happy little virtual network. But I think we're going a bit off topic here ... 
Jan -- #==# # It's easier to get forgiveness for being wrong than for being right. # # Let's break this rule - forgive me. # #== [EMAIL PROTECTED] # ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Eliminating start error message: unary operator
Hi Tom, Thank you very much for your help. In order to get the pg_ctl trace at start up, I would appreciate it if you could advise me on how to modify the postgresql script. The line in question in the script is: su -l postgres -s /bin/sh -c "/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster -o '-p ${PGPORT}' start > /dev/null 2>&1" < /dev/null Should it be something like: su -l postgres -s /bin/sh -c "/bin/sh -x /usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster -o '-p ${PGPORT}' start > mylog 2>&1" < /dev/null Thanks in advance for your response. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tom Lane Sent: Tuesday, June 24, 2003 10:32 AM To: Carlos Cc: [EMAIL PROTECTED]; Dain Subject: Re: [GENERAL] Eliminating start error message: unary operator Carlos Oliva [EMAIL PROTECTED] writes: This log has the error but I do not understand why it occurs. Starting postgresql service: + su -l postgres -s /bin/sh -c '/usr/bin/pg_ctl -D /var/lib/pgsql/data -p /usr/bin/postmaster -o '\''-p 5432'\'' start >/dev/null 2>&1' -sh: [: ==: unary operator expected Hm. Apparently the error is actually occurring inside the pg_ctl script. Try changing the above line in the initscript to do /bin/sh -c '/bin/sh -x /usr/bin/pg_ctl ...' so we can get tracing of the pg_ctl script too. (I am now thinking that the error probably occurs because pg_ctl is expecting some environment variable to be set that is not set when run from the boot script? If so, you might not see the failure if you try to run pg_ctl by hand with sh -x. But you could try that first if you like.) regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
warning: long, Re: [GENERAL] Database design problem: multilingual strings
Hi! We had this problem in GnuMed (www.gnumed.org). Eventually, we decided that it is only really solvable automatically for fixed strings, that is, strings that are known at database creation. User-supplied strings need user-supplied translations as well. The translation mechanism works for them just as well, but you depend on the user to supply a translation. I am attaching the solution we use in GnuMed. The schema file shows our table setup:

-- =============================================
-- GnuMed fixed string internationalisation
--
-- $Source: /cvsroot/gnumed/gnumed/gnumed/server/sql/gmI18N.sql,v $
-- $Id: gmI18N.sql,v 1.14 2003/06/10 09:58:11 ncq Exp $
-- license: GPL
-- author: [EMAIL PROTECTED]
-- =============================================
-- Import this script into any GnuMed database you create.
-- This will allow for transparent translation of 'fixed'
-- strings in the database. Simply switching the language in
-- i18n_curr_lang will enable the user to see another language.
-- For details please see the Developer's Guide.
-- =============================================
-- force terminate + exit(3) on errors if non-interactive
\set ON_ERROR_STOP 1

create table i18n_curr_lang (
    id serial primary key,
    owner name default CURRENT_USER unique not null,
    lang varchar(15) not null
);
comment on table i18n_curr_lang is
    'holds the currently selected language per user for fixed strings in the database';

create table i18n_keys (
    id serial primary key,
    orig text unique
);
comment on table i18n_keys is
    'this table holds all the original strings that need translation, so give this to your language teams; the function i18n() will take care to enter relevant strings into this table; the table does NOT play any role in runtime translation activity';

create table i18n_translations (
    id serial primary key,
    lang varchar(10),
    orig text,
    trans text,
    unique (lang, orig)
);
create index idx_orig on i18n_translations(orig);

create function i18n(text) returns text as '
DECLARE
    original ALIAS FOR $1;
BEGIN
    if not exists(select id from i18n_keys where orig = original) then
        insert into i18n_keys (orig) values (original);
    end if;
    return original;
END;
' language 'plpgsql';
comment on function i18n(text) is
    'insert original strings into i18n_keys for later translation';

create function _(text) returns text as '
DECLARE
    orig_str ALIAS FOR $1;
    trans_str text;
    my_lang varchar(10);
BEGIN
    -- no translation available at all?
    if not exists(select orig from i18n_translations where orig = orig_str) then
        return orig_str;
    end if;
    -- get language
    select into my_lang lang from i18n_curr_lang where owner = CURRENT_USER;
    if not found then
        return orig_str;
    end if;
    -- get translation
    select into trans_str trans from i18n_translations
        where lang = my_lang and orig = orig_str;
    if not found then
        return orig_str;
    end if;
    return trans_str;
END;
' language 'plpgsql';
comment on function _(text) is
    'will return either the input or the translation if it exists';

create function set_curr_lang(text) returns unknown as '
DECLARE
    language ALIAS FOR $1;
BEGIN
    if exists(select id from i18n_translations where lang = language) then
        delete from i18n_curr_lang where owner = CURRENT_USER;
        insert into i18n_curr_lang (lang) values (language);
        delete from i18n_curr_lang
            where owner = (select trim(leading ''_'' from CURRENT_USER));
        insert into i18n_curr_lang (lang, owner)
            values (language, (select trim(leading ''_'' from CURRENT_USER)));
        return 1;
    else
        raise exception ''Cannot set current language to [%]. No translations available.'', language;
        return NULL;
    end if;
    return NULL;
END;
' language 'plpgsql';
comment on function set_curr_lang(text) is
    'set preferred language: - for current user and _current_user - only if translations for this language are available';

create function set_curr_lang(text, name) returns unknown as '
DECLARE
    language ALIAS FOR $1;
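A short sketch of how the functions above are meant to be used; the sample string 'Save' and the language code 'de' are invented for illustration:

```sql
-- Register a fixed string at schema-load time (i18n() also returns its input,
-- so it can wrap string literals in other scripts):
SELECT i18n('Save');

-- A language team later supplies translations:
INSERT INTO i18n_translations (lang, orig, trans)
VALUES ('de', 'Save', 'Speichern');

-- Each user picks a language, once translations exist for it:
SELECT set_curr_lang('de');

-- At runtime, _() returns the translation, or the original if none exists:
SELECT _('Save');
```

The design deliberately degrades gracefully: a missing translation or unset language simply yields the original string.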
Re: [GENERAL] Database design problem: multilingual strings
At 19:15 24.06.2003, Antonios Christofides said: [snip] 'description' is no longer enough; it must be possible to add translations to _any_ language and to any number of languages. I've thought of a number of solutions, but none satisfies me to the point that I'd feel ready to die :-) I'd much appreciate comments/experience from anyone. I include the solutions I've thought of below, but you don't need to read them if you have a good pointer in hand. [snip] Taking off from this table:

table cutlery_types
id  description
1   Spoon
2   Fork
3   Knife
4   Teaspoon

you might use a table set like this:

table lg_dependent
oid_table | column | id_row | language | text
----------+--------+--------+----------+-----------
    #     | desc   |   1    |    en    | Spoon
    #     | desc   |   1    |    ger   | Löffel
    #     | desc   |   1    |    fr    | Cuilliere
    #     | desc   |   1    |    el    | Koutali

Use a select statement like this:

select t1.id, t2.text /*, etc */
from cutlery_types t1
join lg_dependent t2
  on t2.oid_table = (select oid from pg_class where relname='cutlery_types')
 and t2.column = 'desc'
 and t2.language = 'en'

It might be better to _not_ use the table oid, to be more portable across databases or pg_dumps without OIDs. You might either use the table name then, or have another table mapping table names to unique numbers. -- O Ernest E. Vogelsinger (\)ICQ #13394035 ^ http://www.vogelsinger.at/ ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Inheritance Indexes
On Tue, 24 Jun 2003, Alan Williams wrote:

hs.exon.2=> \d ga_psr_transcript_1
     Table "public.ga_psr_transcript_1"
    Column    |          Type          | Modifiers
--------------+------------------------+-----------
 id           | integer                | not null
 parent       | integer                |
 seqname      | character varying(100) | not null
 source_type  | smallint               | not null
 feature_type | smallint               | not null
 start        | integer                | not null
 stop         | integer                | not null
 strand       | character(1)           | not null
 annot_name   | character varying(100) | not null
 depth        | integer                | not null
Indexes: ga_psr_transcript_1_pkey primary key btree (id),
         ga_psr_transcript_1_start_stop btree (start, stop),
         ga_psr_transcript_1_stop btree (stop)
Check constraints: aw_psr_transcript_1_strand (((strand = '+'::bpchar) OR (strand = '-'::bpchar)) OR (strand = '.'::bpchar))
Triggers: RI_ConstraintTrigger_1412526244, RI_ConstraintTrigger_1412526245

hs.exon.2=> \d ga_psr_exon_1
        Table "public.ga_psr_exon_1"
        Column         |          Type          | Modifiers
-----------------------+------------------------+-----------
 id                    | integer                | not null
 parent                | integer                |
 seqname               | character varying(100) | not null
 source_type           | smallint               | not null
 feature_type          | smallint               | not null
 start                 | integer                | not null
 stop                  | integer                | not null
 strand                | character(1)           | not null
 annot_name            | character varying(100) | not null
 transcript_cluster_id | integer                | not null
 depth                 | integer                | not null
Indexes: ga_psr_exon_1_pkey primary key btree (id),
         ga_psr_exon_1_parent btree (parent),
         ga_psr_exon_1_start_stop btree (start, stop),
         ga_psr_exon_1_stop btree (stop)
Check constraints: aw_psr_exon_1_strand (((strand = '+'::bpchar) OR (strand = '-'::bpchar)) OR (strand = '.'::bpchar))
Triggers: RI_ConstraintTrigger_1412526088, RI_ConstraintTrigger_1412526089

hs.exon.2=> select count(*) from ga_psr_transcript_1;
 count
-------
 43398
(1 row)

hs.exon.2=> select count(*) from ga_psr_exon_1;
 count
--------
 176908
(1 row)

Now if I do a join on the leaf tables everything looks good:

hs.exon.2=> explain select * from ga_psr_transcript_1 t, ga_psr_exon_1 e where e.parent = t.id;

 Merge Join  (cost=0.00..9087.71 rows=176908 width=98)
   Merge Cond: ("outer".id = "inner".parent)
   ->  Index Scan using ga_psr_transcript_1_pkey on ga_psr_transcript_1 t  (cost=0.00..1066.17 rows=43398 width=47)
   ->  Index Scan using ga_psr_exon_1_parent on ga_psr_exon_1 e  (cost=0.00..5259.52 rows=176908 width=51)
(4 rows)

If I do a join on the parent table, the optimizer refuses to use the indices:

hs.exon.2=> explain select * from ga_psr_transcript t, ga_psr_exon e where e.parent = t.id;

In this case, you can't use a single index scan to get the rows in order, so the part that makes the above a nice plan doesn't really apply. If you're getting all the rows and sorting them, index scans are probably a waste of time unless you have a lot of dead space. If we supported multi-table indexes, that'd potentially let you get a plan like the above.

 Merge Join  (cost=1239155.37..70188119.40 rows=5514877218 width=334)
   Merge Cond: ("outer".id = "inner".parent)
   ->  Sort  (cost=243481.37..244816.14 rows=533908 width=165)
         Sort Key: t.id
         ->  Append  (cost=0.00..10980.08 rows=533908 width=165)
             [lots of seqscans snipped]
   ->  Sort  (cost=995674.00..1000838.64 rows=2065853 width=169)
         Sort Key: e.parent
         ->  Append  (cost=0.00..43563.52 rows=2065853 width=169)
             [more seqscans snipped]

Same thing even if I'm querying for a specific tuple:

hs.exon.2=> explain select * from ga_psr_transcript t, ga_psr_exon e where e.parent = t.id and t.id = 123;

ISTM it's willing to use an index scan on at least some of t's subtables. Does explicitly saying e.parent=123 help?

 Hash Join  (cost=99.06..73488.33 rows=320207 width=334)
   Hash Cond: ("outer".parent = "inner".id)
   ->  Append  (cost=0.00..43563.52 rows=2065853 width=169)
       [lots of seqscans snipped
[GENERAL] Failure to install 7.3.3
I get an error during configure. I have Red Hat 7.2. The readline library and zlib library are not found... they are installed with the latest version. How do I fix this? -Dan ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Database design problem: multilingual strings
In looking at your ideas, a thought came to mind. This issue is something I've been looking at needing to address, so any comments are welcome. Whatever string is entered first becomes the reference string; you could add a field for that. The design below needs a unique index on: Translations( string_id, lang_id ); Translations( string_id, lang_id, lang_string ); Languages ( lang_name_full_eng ); Languages ( iso_latin_abbrev );

CREATE TABLE StringIDs(
    string_id serial NOT NULL PRIMARY KEY
);

CREATE TABLE Languages(
    lang_id serial NOT NULL PRIMARY KEY,
    lang_name_full_eng varchar(30) NOT NULL,
    iso_latin_abbrev varchar(2) NOT NULL
);

CREATE TABLE Translations(
    string_id INT4 NOT NULL,
    lang_id INT4 NOT NULL,
    lang_string BYTEA NOT NULL
);

ALTER TABLE Translations ADD CONSTRAINT FK_translations_string_id
    FOREIGN KEY (string_id) REFERENCES StringIDs (string_id);
ALTER TABLE Translations ADD CONSTRAINT FK_translations_lang_id
    FOREIGN KEY (lang_id) REFERENCES Languages (lang_id);

Antonios Christofides wrote: Hi, I'm designing a database with a web interface, which will be accessed by international users. The French may be requesting/entering information in French, the Greeks in Greek, and the Japanese in Japanese. I want every string in the database to be multilingual. Let's use a hypothetical example: simple lookup table cutlery_types:

id  description
1   Spoon
2   Fork
3   Knife
4   Teaspoon

'description' is no longer enough; it must be possible to add translations to _any_ language and to any number of languages. I've thought of a number of solutions, but none satisfies me to the point that I'd feel ready to die :-) I'd much appreciate comments/experience from anyone. I include the solutions I've thought of below, but you don't need to read them if you have a good pointer in hand. 
Thanks a lot! Solution 1 -- table cutlery_types_description_translations id language translation -- 1 fr Cuilliere 1 el Koutali 2 fr Forchette 2 es Tenedor (or language can be id fk to languages table) Clean solution, but... an additional table for each string in the database?! The 50 tables will quickly become 300 :-( Solution 2 -- translations id language translation - Spoon fr Cuilliere Spoon el Koutali Fork fr Forchette Fork es Tenedor Not possible, because it uses the English version of the string as an id. What if the English version is a 300-word essay? What if the English version changes? What if no English version exists for that particular string? Solution 3 -- cutlery_types id description -- 1 { Spoon, Cuilliere, , Koutali } 2 { Fork, Forchette, Tenedor, } Where, obviously, a languages table tells that 1 is English, 2 is French, 3 is Spanish and 4 is Greek. One of the problems with this solution is that if I want to add a translation for language 45, I need to insert an empty string for the previous 44 languages. Solution 4 -- cutlery_types id description --- 1 Some way to represent a hash: 'en' = 'Spoon', 'fr' = 'Cuilliere' etc. 2 'en' = 'Fork', 'fr' = 'Forchette', 'es' = 'Tenedor' The description could be, for example, a TEXT containing all translations separated by some kind of separator, or an array whose odd elements may be the hash keys and the even elements the translations. In any case, SELECT id, getstring(description, 'el') FROM cutlery_types would use the user-defined function getstring to retrieve the needed translation. Far from certain on how efficient it can be done. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
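A hedged lookup sketch against the Translations schema proposed above; the language codes 'el' and 'en' and the English-fallback behavior are illustrative assumptions, not part of the original design:

```sql
-- Fetch string 1 in Greek, falling back to English when no Greek row exists:
SELECT COALESCE(t_el.lang_string, t_en.lang_string) AS lang_string
FROM StringIDs s
LEFT JOIN Translations t_el ON t_el.string_id = s.string_id
    AND t_el.lang_id = (SELECT lang_id FROM Languages WHERE iso_latin_abbrev = 'el')
LEFT JOIN Translations t_en ON t_en.string_id = s.string_id
    AND t_en.lang_id = (SELECT lang_id FROM Languages WHERE iso_latin_abbrev = 'en')
WHERE s.string_id = 1;
```

The two LEFT JOINs keep the row visible even when one (or both) translations are missing, which is the usual failure mode for user-supplied strings.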
Re: [GENERAL] Failure to install 7.3.3
Red Hat (and most other RPM based distros) split libraries into runtime and development halves. In order to actually compile software against the libraries, you must install the development half. So, to compile against readline, you must also install the readline-devel RPM. Daniel E. Fisher wrote: I get an error during config I have redhat 7.2. readline library and Zlib library not found...they are installed with the latest version how do I fix this? -Dan -- __ / | Paul Ramsey | Refractions Research | Email: [EMAIL PROTECTED] | Phone: (250) 885-0632 \_ ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
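As a sketch, checking and fixing this on a Red Hat 7.2 box would look something like the following; exact package file names depend on the distribution media:

```
# Check whether the development halves are installed:
rpm -q readline-devel zlib-devel

# If not, install them from the distribution CDs or an updates mirror, e.g.:
rpm -ivh readline-devel-*.rpm zlib-devel-*.rpm

# then re-run PostgreSQL's ./configure
```

The same pattern applies to any other "library not found" configure failure on an RPM-based system: the runtime package satisfies dependencies, but only the -devel package provides the headers configure is probing for.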
[GENERAL] COPY, but not everything...
I have a big ASCII text file that I would only like to import columns 1, 3 and 7 from. I know I can filter it through cut, but I wonder if there is an undocumented way to do it with COPY. Sybase has a useless function called filler() that you use for fields you want to ignore. For example (pseudocode...): COPY mytable (col1, filler(), filler(), col2, filler(), col3) FROM '/tmp/foobar'; I like the new functionality allowing me to ignore columns in the target table, and to rearrange the columns, but this functionality seems to be missing... (Or am I reading it wrong...) Thanks! Ian Harding Programmer/Analyst II Tacoma-Pierce County Health Department [EMAIL PROTECTED] Phone: (253) 798-3549 Pager: (253) 754-0002 ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
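Until something like filler() exists, the pre-filter approach can be sketched like this; the file, table, and column names are hypothetical, following the example above:

```shell
# COPY cannot skip input columns, so pre-filter the file first.
# Stand-in for the big tab-delimited file:
printf 'a\tb\tc\td\te\tf\tg\n' > /tmp/foobar
# Keep only columns 1, 3 and 7:
awk -F'\t' 'BEGIN{OFS="\t"} {print $1, $3, $7}' /tmp/foobar > /tmp/foobar.cut
cat /tmp/foobar.cut
# Then load the filtered file (requires a running server and table):
# psql -c "COPY mytable (col1, col2, col3) FROM '/tmp/foobar.cut'"
```

Piping through awk (or cut -f1,3,7) into COPY FROM STDIN via psql avoids the temporary file for very large inputs.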
Re: [GENERAL] Inheritance Indexes
On Tue, 24 Jun 2003, Stephan Szabo wrote:

hs.exon.2=> explain select * from ga_psr_transcript_1 t, ga_psr_exon_1 e where e.parent = t.id;

 Merge Join  (cost=0.00..9087.71 rows=176908 width=98)
   Merge Cond: ("outer".id = "inner".parent)
   ->  Index Scan using ga_psr_transcript_1_pkey on ga_psr_transcript_1 t  (cost=0.00..1066.17 rows=43398 width=47)
   ->  Index Scan using ga_psr_exon_1_parent on ga_psr_exon_1 e  (cost=0.00..5259.52 rows=176908 width=51)
(4 rows)

If I do a join on the parent table, the optimizer refuses to use the indices:

hs.exon.2=> explain select * from ga_psr_transcript t, ga_psr_exon e where e.parent = t.id;

In this case, you can't use a single index scan to get the rows in order, so the part that makes the above a nice plan doesn't really apply. If you're getting all the rows and sorting them, index scans are probably a waste of time unless you have a lot of dead space. If we supported multi-table indexes, that'd potentially let you get a plan like the above.

Because of the foreign key constraint, the database engine could do the above query on each of the child tables and concatenate the results. This is because there is a notion in our schema of paired inheritance, where both the ga_psr_exon and ga_psr_transcript tables are subclassed by chromosome. I.e.: ga_psr_exon_1.parent -> ga_psr_transcript_1.id. Of course the foreign key is the only indication of this, and I can't say that I'm entirely surprised that the optimizer doesn't catch this.

This constraint unfortunately breaks down when joining on a non-foreign key, such as range queries (the rows in these tables represent ranges in a one-dimensional space) like:

explain select * from ga_psr_exon_1 e1, ga_psr_exon_1 e2
where e1.start <= e2.start and e1.start <= e2.stop
  and e1.stop >= e2.start and e1.stop >= e2.stop;

 Nested Loop  (cost=0.00..995313942.65 rows=386375808 width=102)
   ->  Seq Scan on ga_psr_exon_1 e1  (cost=0.00..3691.08 rows=176908 width=51)
   ->  Index Scan using ga_psr_exon_1_start_stop on ga_psr_exon_1 e2  (cost=0.00..5582.47 rows=2184 width=51)
         Index Cond: (("outer".start <= e2.start) AND ("outer".stop >= e2.start) AND ("outer".start <= e2.stop) AND ("outer".stop >= e2.stop))

versus

explain select * from ga_psr_exon e1, ga_psr_exon e2
where e1.start <= e2.start and e1.start <= e2.stop
  and e1.stop >= e2.start and e1.stop >= e2.stop;

which results in a nested loop of seq scans. (It is actually worse than this, as we would really like to query for overlapping ranges, not containment.) Currently we use either a perl middleware or UNIONs to explicitly force these paired table relationships.

 Merge Join  (cost=1239155.37..70188119.40 rows=5514877218 width=334)
   Merge Cond: ("outer".id = "inner".parent)
   ->  Sort  (cost=243481.37..244816.14 rows=533908 width=165)
         Sort Key: t.id
         ->  Append  (cost=0.00..10980.08 rows=533908 width=165)
             [lots of seqscans snipped]
   ->  Sort  (cost=995674.00..1000838.64 rows=2065853 width=169)
         Sort Key: e.parent
         ->  Append  (cost=0.00..43563.52 rows=2065853 width=169)
             [more seqscans snipped]

Same thing even if I'm querying for a specific tuple:

hs.exon.2=> explain select * from ga_psr_transcript t, ga_psr_exon e where e.parent = t.id and t.id = 123;

ISTM it's willing to use an index scan on at least some of t's subtables. Does explicitly saying e.parent=123 help?

Yes, adding e.parent=123 results in the desired result of index scans into both tables. However, without including this, the optimizer still predicts 31 results from the index scans on ga_psr_transcript* and yet insists on using a seq scan into each ga_psr_exon* table. It expects to get 2065853 rows back from the ga_psr_exon* tables when in reality it is more like 310 rows. Thanks for the response. -Alan

FWIW, we subclass by chromosome for performance reasons. We have tables (at the chromosome level) with upwards of 6 million rows against which we run a variety of data mining queries. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
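The UNION workaround mentioned above can be sketched like this; only chromosomes 1 and 2 are shown, and the column names come from the posted schema:

```sql
-- Join each paired leaf set directly, then concatenate the results:
SELECT t.id AS transcript_id, e.id AS exon_id
FROM ga_psr_transcript_1 t JOIN ga_psr_exon_1 e ON e.parent = t.id
UNION ALL
SELECT t.id, e.id
FROM ga_psr_transcript_2 t JOIN ga_psr_exon_2 e ON e.parent = t.id;
```

Each branch joins one chromosome's pair of leaf tables, so the planner can use the per-chromosome indexes instead of appending and sorting every subtable; UNION ALL rather than UNION avoids a pointless duplicate-elimination sort.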
Re: [GENERAL] Failure to install 7.3.3
On Tue, 24 Jun 2003, Daniel E. Fisher wrote: I get an error during config I have redhat 7.2. readline library and Zlib library not found...they are installed with the latest version how do I fix this? you need the -devel versions of those RPMs. i.e. readline-devel etc... ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] A creepy story about dates. How to prevent it?
We are actually considering not honoring locale for initdb encodings, so it might make no sense to do this --- that's another reason for the question mark, but until we decide, it is an open issue. --- Lincoln Yeoh wrote: At 03:24 PM 6/23/2003 -0400, Bruce Momjian wrote: Added to TODO, with question mark: * Have initdb set DateStyle based on locale? Given various issues with locale (indexes, ordering etc.) I'd think that having a DB follow the O/S locale should be a special case and require explicit configuration. More so if certain locales are significantly slower than others, which seemed to be the case at least in recent memory. What if a European DB-backed website is hosted on a US server with English, French and German data? If apps/programs are talking to DBs more than people are, then it may make more sense to store things in an application-friendly format (e.g. date = YYYY-MM-DD, or seconds since epoch) and have the app convert it based on the user's preferences. After all, even in English, apps may choose to display Tuesday as T, Tue, Tuesday, or whatever the Boss wants. Unless postgresql has special features allowing switching from one locale to another on the fly (including indexes, ordering etc.) within a DB session, I'd rather stick to, say, the C locale, or whatever it is that's fastest. Another point of consideration: if someone accidentally loads multibyte/other locale data into a C locale DB (or whatever is chosen as default DB locale), would dumping the loaded data and reloading it into a multibyte locale result in information/precision loss? Link. ---(end of broadcast)--- TIP 8: explain analyze is your friend -- Bruce Momjian| http://candle.pha.pa.us [EMAIL PROTECTED] | (610) 359-1001 + If your life is a hard drive, | 13 Roberts Road + Christ can be your backup.| Newtown Square, Pennsylvania 19073 ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] A creepy story about dates. How to prevent it?
I thought it was, more correctly, that we were considering not using the system locale automatically, but that if someone wished to use --locale=en_US we'd let that work, right? I would assume that if someone actually went to the bother of setting a locale, then it should be the deciding factor in how we handle dates, et al. On Tue, 24 Jun 2003, Bruce Momjian wrote: We are actually considering not honoring locale for initdb encodings, so it might make no sense to do this --- that's another reason for the question mark, but until we decide, it is an open issue. --- Lincoln Yeoh wrote: snip
Re: [GENERAL] Inheritance Indexes
On Tue, 24 Jun 2003, Alan Williams wrote: On Tue, 24 Jun 2003, Stephan Szabo wrote: hs.exon.2= explain select * from ga_psr_transcript_1 t, ga_psr_exon_1 e where e.parent = t.id; QUERY PLAN Merge Join (cost=0.00..9087.71 rows=176908 width=98) Merge Cond: (outer.id = inner.parent) - Index Scan using ga_psr_transcript_1_pkey on ga_psr_transcript_1 t (cost=0.00..1066.17 rows=43398 width=47) - Index Scan using ga_psr_exon_1_parent on ga_psr_exon_1 e (cost=0.00..5259.52 rows=176908 width=51) (4 rows) If I do a join on the parent table, the optimizer refuses to use the indices: hs.exon.2= explain select * from ga_psr_transcript t, ga_psr_exon e where e.parent = t.id; In this case, you can't use a single index scan to get the rows in order, so the part that makes the above a nice plan doesn't really apply. If you're getting all the rows and sorting them, index scans are probably a waste of time unless you have a lot of dead space. If we supported multi-table indexes, that'd potentially let you get a plan like the above. Because of the foreign key constraint, the database engine could do the above query on each of the child tables and concatenate the results. This is because there is a notion in our schema of paired I don't think it can do exon_1 - transcript_1 union exon_2 - transcript_2 etc from the above unless there's also a guarantee of uniqueness, since if the same id showed up in transcript_1 and transcript_2 you'd have to join them both to a parent in exon_1. The individual id primary keys are not sufficient to show that, though, so you'd have to join exon_1 - transcript_1 union exon_1 - transcript_2 union exon_2 - transcript_1... to guarantee the same results, I think. I don't think that we're ever likely to figure out the optimization for those cases in any case. Multi-table indexes will probably be coming eventually, which will allow a scan over that rather than the append step.
--- Merge Join (cost=1239155.37..70188119.40 rows=5514877218 width=334) Merge Cond: (outer.id = inner.parent) - Sort (cost=243481.37..244816.14 rows=533908 width=165) Sort Key: t.id - Append (cost=0.00..10980.08 rows=533908 width=165) [lots of seqscans snipped] - Sort (cost=995674.00..1000838.64 rows=2065853 width=169) Sort Key: e.parent - Append (cost=0.00..43563.52 rows=2065853 width=169) [more seqscans snipped] Same thing even if I'm querying for a specific tuple: hs.exon.2= explain select * from ga_psr_transcript t, ga_psr_exon e where e.parent = t.id and t.id = 123; ISTM it's willing to use an index scan on at least some of t's subtables. Does explicitly saying e.parent=123 help? Yes, adding e.parent=123 results in the desired result of index scans into both tables. However, without including this the optimizer still predicts 31 results from the index scans on ga_psr_transcript* and yet insists on using a seq scan into each ga_psr_exon* table. It expects to get 2065853 rows back from the ga_psr_exon* tables when in reality it is more like 310 rows. Yeah, it's guessing the number of rows rather poorly. Without the implied search condition, the index scan wouldn't help barring a small estimated number of rows in t making nested loop look good (I assume the t estimate is way off too, does analyzing the various tables or possibly raising the analyze buckets for the id column and analyzing get that estimate to something reasonable?). I *think* 7.4 may be smarter about implying these conditions as well. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
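The workaround discussed above -- restating the join constant on both sides -- can be sketched like this (table names are taken from the thread; the redundant condition is what lets each child table use its index, since the 7.3-era planner does not imply it across inherited tables):

```sql
-- Without "e.parent = 123" the planner seqscans every ga_psr_exon*
-- child table; stating the implied condition explicitly enables
-- index scans into both table hierarchies:
EXPLAIN
SELECT *
FROM ga_psr_transcript t, ga_psr_exon e
WHERE e.parent = t.id
  AND t.id = 123
  AND e.parent = 123;   -- redundant given the join, but helps the planner
```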
Re: [GENERAL] Failure to install 7.3.3
Paul Ramsey wrote: Red Hat (and most other RPM-based distros) split libraries into runtime and development halves. In order to actually compile software against the libraries, you must install the development half. So, to compile against readline, you must also install the readline-devel RPM. Daniel E. Fisher wrote: I get an error during config. I have Red Hat 7.2. readline library and zlib library not found... they are installed with the latest version. How do I fix this? Also, even though the SRPM dependencies don't explicitly state a requirement for them, I have found that I must also install termcap-devel to get readline support to build. Mike Mascari [EMAIL PROTECTED] ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] 7.3.3 install under way
I got 7.3.3 installed, Paul, thanks... here is what is left. I am trying a php bulletin board by phpBB. I don't think I have the server configured all the way yet. Any help is appreciated on how to get this straight. Warning: Unable to connect to PostgreSQL server: FATAL: No pg_hba.conf entry for host 204.213.206.27, user postgres, database nm in /var/www/html/crohns/phpBB2/db/postgres7.php on line 79 phpBB : Critical Error Could not connect to the database This is in the pg_hba.conf file: www.websiteIamusing.com all themachine'sIP 255.255.255.255 trust I am sure I need to edit something yet. -Dan ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] capturing and storing query statement with rules
I would like to create an audit log of update and delete queries against a table. I want the log on the server side, since access will be done using various clients via ODBC. I would like to capture the user, current datetime and the query statement. My thought was to accomplish this by creating a rule that will insert the information into a log table. How can I capture the query statement so that I can place it in an insert query? Is there a variable in the server that holds the query statement as a string? Am I about to go in over my head (again)? Thanks, Andrew
[GENERAL] pg_dumping schemes (not schema)
Although I group my data topically in databases, most of the data is integrated on the client side (MS Access). That is, tables from different databases are often joined together during analysis. I'm thinking of simplifying analysis and administration by consolidating the databases and separating the tables into topics using schemas. Unfortunately, the resulting data dumps would be huge. The alternative, pg_dumping individual tables, would result in a very high number of backup files to manage. Is there a way to pg_dump a schema at a time? Thanks, Andrew Gould
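For the record, later pg_dump releases grew a schema-selection switch; a sketch, assuming a pg_dump new enough (7.4 onward) to support it -- on 7.3 the only approximation is dumping tables one at a time:

```shell
# Dump only one schema's tables (pg_dump >= 7.4, where -n/--schema
# selects a namespace; in 7.3, -n meant something else entirely):
pg_dump -n sales -f sales_schema.sql mydb

# Per-table fallback for older versions (one file per table):
pg_dump -t sales_orders mydb > sales_orders.sql
```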
Re: [GENERAL] capturing and storing query statement with rules
On Tuesday 24 June 2003 at 20:17, Andrew Gould wrote: I would like to create an audit log of update and delete queries against a table. I want the log on the server side, since access will be done using various clients via ODBC. I would like to capture the user, current datetime and the query statement. My thought was to accomplish this by creating a rule that will insert the information into a log table. To be server side, you need to use a trigger and a plpgsql function. On this page you will find more information on trigger-related functions: http://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=plpgsql-trigger.html Perhaps you can compare OLD and NEW rows to know which rows will be updated. And you'll find at the end of the page how to get the current user (current_user keyword) and the current datetime (function now). How can I capture the query statement so that I can place it in an insert query? Is there a variable in the server that holds the query statement as a string? I don't think so... at least I wasn't able to find something like this in the documentation. Regards. -- Guillaume !-- http://absfr.tuxfamily.org/ --. ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] 7.3.3 install under way
snip This is in the pg_hba.conf file www.websiteIamusing.com all themachine'sIP 255.255.255.255 trust I am sure I need to edit something yet. host is a keyword (for TCP/IP connections, as opposed to the keyword local for local domain socket connections); it is not a hostname. Try: host all the.machine.ip.address 255.255.255.255 Cheers, Steve
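Putting the pieces together, a minimal sketch of the relevant pg_hba.conf line -- the IP comes from the error message earlier in the thread, and the format is the 7.3-era one without a separate USER column (later releases add one). "trust" is kept from the original posting, though a password method such as md5 would be safer for anything reachable from a web server:

```
# TYPE  DATABASE  IP-ADDRESS       IP-MASK          AUTH-METHOD
host    all       204.213.206.27   255.255.255.255  trust
```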
Re: [GENERAL] capturing and storing query statement with rules
Guillaume LELARGE wrote: On Tuesday 24 June 2003 at 20:17, Andrew Gould wrote: I would like to create an audit log of update and delete queries against a table. I want the log on the server side, since access will be done using various clients via ODBC. I would like to capture the user, current datetime and the query statement. My thought was to accomplish this by creating a rule that will insert the information into a log table. To be server side, you need to use a trigger and a plpgsql function. On this page you will find more information on trigger-related functions: http://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=plpgsql-trigger.html Perhaps you can compare OLD and NEW rows to know which rows will be updated. And you'll find at the end of the page how to get the current user (current_user keyword) and the current datetime (function now). How can I capture the query statement so that I can place it in an insert query? Is there a variable in the server that holds the query statement as a string? I think this shows how to do what you want: http://archives.postgresql.org/pgsql-sql/2003-05/msg00301.php HTH, Joe ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
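The trigger-based approach suggested above can be sketched as follows. The table and column names are invented for illustration, and the 7.3-era quoted-string function body is used (dollar quoting came later). Note it logs who, when, and what operation, but not the original query string, which a plpgsql trigger cannot see directly:

```sql
-- Hypothetical audit table; names are illustrative only.
CREATE TABLE audit_log (
    logged_by  name      DEFAULT current_user,
    logged_at  timestamp DEFAULT now(),
    operation  text                         -- 'UPDATE' or 'DELETE'
);

CREATE FUNCTION audit_trigger() RETURNS trigger AS '
BEGIN
    INSERT INTO audit_log (operation) VALUES (TG_OP);
    IF TG_OP = ''DELETE'' THEN
        RETURN OLD;   -- let the delete proceed
    END IF;
    RETURN NEW;       -- for updates, keep the new values
END;
' LANGUAGE plpgsql;

-- Attach it to the table being audited (here a hypothetical "mytable"):
CREATE TRIGGER audit_changes
    BEFORE UPDATE OR DELETE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE audit_trigger();
```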
Re: [GENERAL] A creepy story about dates. How to prevent it?
Good point. --- scott.marlowe wrote: I thought it was, more correctly, that we were considering not using the system locale automatically, but that if someone wished to use --locale=en_US we'd let that work, right? I would assume that if someone actually went to the bother of setting a locale, then it should be the deciding factor in how we handle dates, et al. snip
Re: [GENERAL] capturing and storing query statement with rules
Joe Conway wrote: Guillaume LELARGE wrote: snip How can I capture the query statement so that I can place it in an insert query? Is there a variable in the server that holds the query statement as a string? I think this shows how to do what you want: http://archives.postgresql.org/pgsql-sql/2003-05/msg00301.php Maybe debug_query_string should be mapped into a variable like CURRENT_USER? Perhaps something like CURRENT_QUERY? Mike Mascari [EMAIL PROTECTED]
Re: [GENERAL] Failure to install 7.3.3
If that's true, then termcap-devel should probably be added to the SRPM dependencies. Isn't the RPM maintainer on this list? (wink wink) Wes Mike Mascari wrote on 06/24/2003 03:32:22 PM: snip Also, even though the SRPM dependencies don't explicitly state a requirement for them, I have found that I must also install termcap-devel to get readline support to build. Mike Mascari [EMAIL PROTECTED]
[GENERAL] server layout
Hi out there, I am wondering what is the right dimension for a server I have to lay out and maintain. So, here are the specifications, as far as I can know them already: - approx. 15 users - at the end of the month, many of them online at the same time - ca. 30-40 GB data overall (few but big tables) - data input rarely (one user) - others just do queries and maybe reports - no other functionality than running PostgreSQL What I decided to buy is: 1-2 Intel Xeon 3 GHz, 2 GB RAM, SCSI 10k 72 GB disk for data, Linux based, backup on IDE hot-plug disks. I hope it is the right list to ask this question; if not: sorry. Regards Joern -- Linux is like wigwam - no windows, no gates, apache inside.
[GENERAL] Alternative replication method.
Has anyone used PostgreSQL with Drbd (http://www.complang.tuwien.ac.at/reisner/drbd/)? They claim it works with PostgreSQL. Do the PostgreSQL hackers foresee any issues with this type of replication method? Do you think this method (combined with some other HA utilities) could work well as a hot-spare database server? -- Best Regards, Mike Benoit NetNation Communications Inc. Systems Engineer Tel: 604-684-6892 or 888-983-6600 --- Disclaimer: Opinions expressed here are my own and not necessarily those of my employer ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Alternative replication method.
I haven't used this. It would probably work, but there's also another way, just using plain SCSI. You can attach both your main machine and a hot standby to an external RAID array. When the main machine goes down, just mount the RAID array yourself. However, you should also install a serial power switch so that you can kill the other server dead, to prevent possible damage. Jon On 24 Jun 2003, Mike Benoit wrote: Has anyone used PostgreSQL with Drbd (http://www.complang.tuwien.ac.at/reisner/drbd/)? They claim it works with PostgreSQL. Do the PostgreSQL hackers foresee any issues with this type of replication method? snip
Re: [GENERAL] capturing and storing query statement with rules
http://gborg.postgresql.org/project/tablelog/download/download.php or http://gborg.postgresql.org/project/audittrail/download/download.php?branch=devel I haven't used either, but table_log seems to be one that I may start using in the near future. On Tue, 2003-06-24 at 13:17, Andrew Gould wrote: I would like to create an audit log of update and delete queries against a table. snip -- Best Regards, Mike Benoit NetNation Communications Inc. Systems Engineer
Re: [GENERAL] server layout
On 24 Jun 2003, Joern Muehlencord wrote: snip General's always a good place to start. There's also a performance-tuning list, for when you get it up and running and want it faster, and an admin list for folks who administer boxes. It's a good setup. Since there's little writing, there's no need to put the WAL files elsewhere, but I would recommend adding a second SCSI disk and mirroring it with the first for better performance. Be sure to test the machine thoroughly, especially your memory; look at www.memtest86.com for a decent memory tester. The time to find single-bit memory errors is before you go into production. Test the drives thoroughly too; being in a RAID1 will ensure that any single-bit failure will result in the drive getting kicked out of the RAID set, so RAID is generally a safer bet right from the start. Be sure to read up in the administrator's guide on how to tune your postgresql.conf file.
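For a mostly-read box with 2 GB of RAM like the one described above, postgresql.conf tuning usually starts with a handful of settings; a sketch using 7.3-era parameter names, with values that are illustrative starting points rather than a recommendation:

```
# postgresql.conf sketch (7.3-era names; values are illustrative)
shared_buffers = 8192          # 64 MB at 8 kB pages; raise kernel SHMMAX to match
sort_mem = 8192                # kB per sort; renamed work_mem in later releases
effective_cache_size = 131072  # ~1 GB of expected OS cache, in 8 kB pages
max_fsm_pages = 20000          # free space map, as the list footers keep reminding us
```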
Re: [GENERAL] A creepy story about dates. How to prevent it?
Wasn't a 'set' command also discussed to override locale? On Tue, 2003-06-24 at 16:02, Bruce Momjian wrote: Good point. snip -- +---+ | Ron Johnson, Jr. Home: [EMAIL PROTECTED] | | Jefferson, LA USA http://members.cox.net/ron.l.johnson | | Oh, great altar of passive entertainment, bestow upon me | | thy discordant images at such speed as to render linear | | thought impossible (Calvin, regarding TV) | +---+ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Physical Database Configuration
I know the current method for specifying alternate drives for PG tables is by using symlinks. I had some ideas for simple ways to do this in PG code, but wanted to know if anyone was working on this right now. I'd hate to take the time to start messing with this if others were already on it. Jon ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
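The symlink method mentioned above can be sketched against a throwaway directory layout. The paths and the database oid are invented for illustration; in real use the postmaster must be stopped before moving anything, and the directory under base/ is named after the database's oid:

```shell
# Hypothetical layout: relocate one database's directory to another
# disk and leave a symlink behind (postmaster stopped!).
PGDATA=/tmp/pgdata_demo
ALT=/tmp/altdisk_demo
mkdir -p "$PGDATA/base" "$ALT"

# pretend 16384 is the oid of the database we want to relocate
mkdir -p "$PGDATA/base/16384"
mv "$PGDATA/base/16384" "$ALT/16384"
ln -s "$ALT/16384" "$PGDATA/base/16384"

ls -ld "$PGDATA/base/16384"   # now a symlink onto the other disk
```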
Re: [GENERAL] A creepy story about dates. How to prevent it?
There's already a DateStyle guc that can be set. Would that be the one, or would we need something else? On 24 Jun 2003, Ron Johnson wrote: Wasn't a 'set' command also discussed to override locale? snip
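The DateStyle setting mentioned above is indeed a per-session GUC, so a client can override date formatting regardless of the server's locale; a sketch (also settable per-database or in postgresql.conf):

```sql
-- Per-session override of date input/output conventions.
SET datestyle TO 'ISO';        -- output dates as YYYY-MM-DD
SELECT date '2003-06-24';

SET datestyle TO 'SQL, DMY';   -- European day/month ordering
SELECT date '24/06/2003';      -- read as 24 June 2003, not Dec 6
```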
Re: [GENERAL] Thousands of semops for every i/o
Patch applied. Thanks. --- Jeffrey W. Baker wrote: On Mon, 2003-06-09 at 23:08, Tom Lane wrote: Jeffrey W. Baker [EMAIL PROTECTED] writes: This is the strace of a process which is deleting rows from four tables, inside a transaction, one row at a time. There are a lot of semops for every i/o. There are about 30 connections to this database currently. I thought deletes in a transaction just flew along in Pg, because they simply wrote the deleted transaction ID on the record. It used to work fine in my previous locally-built 7.2 on Debian, but this is 7.2.2 on SuSE Enterprise Server 8.2. The first thing that comes to mind is that the thing is using SysV semaphores as a substitute for spinlocks. If this is on a hardware platform that is supposed to have TAS() support in s_lock.h or s_lock.c, then it's a configuration or build error. If it's on some heretofore unknown platform, you need to write some TAS() code to get decent performance. It looks like a simple change in s_lock.h from #if defined(__i386__) to #if defined(__i386__) || defined(__x86_64__) will be necessary for this platform. Thanks, jwb
Re: [GENERAL] server layout
They make 15k disks now. Max memory and fastest disk seem to keep people happy :-) Joern Muehlencord wrote: Hi out there, I am wondering what is the right dimension for a server I have to lay out and maintain. snip