Re: [GENERAL] Installation on CentOS 5.2 (readline trouble)
Tom Lane wrote: libreadline depends on libtermcap in RHEL-5, but so far as I can see this dependency is explicit in the RPM, so it shouldn't have been possible to not install termcap. (However, I'm not sure just how bright the dependency solver was in RHEL-5 ... maybe it let you install a 64-bit readline but only the 32-bit version of termcap?) I suspected something like that after the results of a Google search (before posting here) --- I tried termcap-devel, and it tells me that there's no such package; then I noticed that there is the package termcap, and also libtermcap, with the accompanying libtermcap-devel. Turns out that, as Greg pointed out, it was the missing ncurses-devel. Strangely enough, the little dummy test program still reports the exact same errors when compiling with -lreadline. But PostgreSQL's configure script now runs successfully! Thanks, and thanks Greg for the valuable tip! Carlos -- -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Installation on CentOS 5.2 (readline trouble)
Hi, I just downloaded the latest, 8.3.4, and I'm trying to install it on a CentOS 5.2 machine with all the updates (64-bit --- the system is an Opteron DC) The configure script fails reporting it doesn't find readline. The package readline-devel *is* installed, and indeed the .so file is under /usr/lib and /usr/lib64 The problem seems to be with readline dependencies: if I try a dummy program (i.e., a file with a main that says return 0;) and compile it with readline: [ ... ]$ gcc -o tt test.c -lreadline I get the following bunch of errors: gcc -o tt test.c -lreadline /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `PC' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `tgetflag' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `tgetent' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `UP' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `tputs' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `tgoto' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `tgetnum' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `BC' /usr/lib/gcc/x86_64-redhat-linux/4.1.2/../../../../lib64/libreadline.so: undefined reference to `tgetstr' I know the problem seems to be more a CentOS or RH specific thing, than a PostgreSQL specific one, but I guess someone out here may have run into this, and perhaps have some suggestions? I also tried with libedit (downloaded it from sourceforge, since I could not find RPM packages native to the distribution). 
Anyway, I get the exact same error --- configure reports the same error, and the little test program, when compiled with -ledit instead of -lreadline, produces almost the same error messages (six of the above lines --- the ones in lowercase, coincidentally). The installation was completed with the switch --without-readline, except that the functionality is somewhat crippled, so I'd much rather avoid the issue. Thanks for any tips/pointers! Carlos
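For anyone hitting the same wall: the undefined symbols (tgetent, tputs, tgoto, and the PC/UP/BC variables) all live in the terminal-handling library that readline depends on, so the dummy program should link once that library is named explicitly. A sketch of the link lines (which library is available depends on the distribution):

```shell
# The termcap symbols come from the curses/termcap library; name it
# after -lreadline, since the linker resolves symbols left to right:
gcc -o tt test.c -lreadline -lncurses
# On systems that ship a separate termcap library instead:
gcc -o tt test.c -lreadline -ltermcap
```

Installing ncurses-devel also provides the headers and library that PostgreSQL's configure probes for, which is consistent with the fix reported in the follow-up above.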
Re: [GENERAL] Please change default characterset for database cluster
CN wrote: Hi! initdb uses SQL_ASCII as the default character set encoding when it is not given the option -E and when it cannot correctly derive one from the locale. I suggest initdb use UNICODE instead of SQL_ASCII, because UNICODE is far more useful than SQL_ASCII. Not all webmasters are willing to spend time reading initdb documentation. I have encountered a free web host providing PhpPgAdmin, through which I can create my databases. The problem is that all newly created databases use SQL_ASCII, which is completely useless to me. Their PhpPgAdmin script does not support the -E switch for createdb. As a result, I have to abandon that service altogether. Were initdb using UNICODE as the default character set, everything would be perfect. In addition to the general comment that the world does not necessarily revolve around you, and that you should not expect all software products in the world to be customized to suit *your* needs, I have to highlight how horrifying this is: Not all webmasters are willing to spend time reading initdb documentation. This is truly horrifying --- well, fortunately, one could hope that it is as wrong as the rest of your message; that dumb and lazy end users and computer-illiterate people are not willing to spend time reading documentation or instructions is ok... But webmasters and database administrators??? Do you *seriously* expect that some highly complex software like a DB server should be handled by people who are not willing to read documentation? That's the most preposterous notion I've read in the last few months! Another detail to add --- for a lot of people, Unicode is a useless feature that has a very important performance hit. For a *very large* fraction of applications, I see it generally advised to use a database with no encoding (which SQL_ASCII essentially is), and in the situations where some locale-aware processing is needed, the client application can do it.
Of course, there are also many, many applications where a DB with Unicode encoding is very useful. In those cases, the administrators can create a database with Unicode encoding (you seem to be one of those that are too busy to be willing to spend time reading the documentation of *createdb*), regardless of what default encoding was specified with initdb. Oh, and BTW, welcome to version 8 of PostgreSQL ... The default encoding for initdb is ... Ta-da!!! Unicode!!! Carlos -- ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
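For completeness, the per-database override alluded to above looks like this (the database name is hypothetical); it works regardless of the cluster default chosen at initdb time:

```shell
# Create a database with an explicit encoding, overriding the
# cluster-wide default (UNICODE is spelled UTF8 in later releases):
createdb -E UNICODE mydb
# Equivalently, via SQL:
psql -c "CREATE DATABASE mydb WITH ENCODING 'UNICODE';"
```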
Re: [GENERAL] Please change default characterset for database cluster
Michael Fuhr wrote: On Fri, Sep 28, 2007 at 09:32:43PM -0400, Carlos Moreno wrote: Oh, and BTW, welcome to version 8 of PostgreSQL ... The default encoding for initdb is ... Ta-da!!! Unicode!!! No, it isn't. If you get UTF8 (formerly UNICODE) as a default then it's because initdb is picking it up from your environment. http://www.postgresql.org/docs/8.2/interactive/app-initdb.html The default is derived from the locale, or SQL_ASCII if that does not work. Right --- I made the over-assumption based on the fact that all the systems where I've installed it (all Fedora flavors of Linux) use UTF8 as the system locale, and thus that one in a sense becomes the default ... Not sure about other flavors of Unix, but certainly in the Windows world all bets would be off (not like anyone would care, huh? ;-) ) Carlos
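A sketch of how to take the environment out of the equation entirely, so that the cluster default is explicit rather than derived (the data directory path is a placeholder):

```shell
# Pin down both locale and encoding at cluster-creation time, instead
# of letting initdb derive them from the shell environment:
initdb -D /usr/local/pgsql/data --locale=en_US.UTF-8 --encoding=UTF8
# Afterwards, verify what actually got used:
psql -c "SHOW lc_collate;"
psql -c "SHOW server_encoding;"
```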
Re: [GENERAL] Find smallest common year
Stefan Schwarzer wrote: Sorry, I forgot to mention my table design, which is like this: name | 2001 | 2002 | 2003 | 2004 | 2005, with one row per country (Afghanistan, Albania, ...). (Yep, I know, bad table design :-)). I tried to change it to the more common id | year | value format, but there were too many SQL problems afterwards for the different queries/calculations we need to have. May I suggest that you concentrate on solving *those* problems instead of the programmatically trivial computation of the lowest common value? Notice that a *really trivial* programming exercise becomes highly involved in your case --- if I'm understanding correctly what you have, I assume you'd have to check the fields one by one for NULL or non-NULL values --- that's intolerably ugly, IMHO, and it is a very simple and direct consequence of an as-unsound-as-it-gets db/table design. Carlos -- ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
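To back up the point about the normalized layout: with an id | year | value style table, the "smallest common year" computation really is trivial. A sketch, with hypothetical table and column names:

```shell
# With a normalized table stats(name, year, value), the earliest year
# for which every country has a non-NULL value is a plain GROUP BY:
psql -d mydb -c "
SELECT min(year)
FROM (
  SELECT year
  FROM stats
  WHERE value IS NOT NULL
  GROUP BY year
  HAVING count(DISTINCT name) = (SELECT count(DISTINCT name) FROM stats)
) AS complete_years;"
```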
Re: [GENERAL] Dumping from older version
Raymond O'Donnell wrote: Just wondering - when using a newer pg_dump to dump from an older Postgres, does pg_dump automatically generate INSERT statements for the data rather than using COPY? I noticed this today when transferring data to a newer server - pg_dump generated INSERTs although I didn't ask for them. Not a problem, but I was curious. Maybe you used the switch -d to specify the database (like with psql and some other client applications)? The switch -d in pg_dump means "generate INSERTs instead of COPY commands". Double-check the syntax/switches for pg_dump (pg_dump --help). HTH, Carlos -- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
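The confusion is understandable because -d means different things to different tools; a side-by-side sketch (the database name is hypothetical):

```shell
# psql: -d selects the database to connect to
psql -d mydb
# pre-8.4 pg_dump: -d (long form --inserts) changes the *data format*,
# emitting one INSERT per row instead of the default COPY blocks:
pg_dump -d mydb > dump_inserts.sql
pg_dump mydb > dump_copy.sql   # default: COPY
```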
Re: [GENERAL] Autostart PostgreSQL in Ubuntu
Johann Maar wrote: But if I try to start PostgreSQL by running sudo /etc/init.d/postgresql start it will fail because it tries to write a PID file to /var/run/postgresql which does not exist. If I create this directory and set the permissions for postgres to write it works (!), but after the next restart of the machine the directory is already gone. With Red Hat systems, you would do chkconfig postgresql on if you installed the postgresql that they distribute. On Ubuntu (and I imagine with all Debian-based systems), AFAIK you have to manually adjust the init scripts for the runlevels that you want. Assuming that you want postgresql started at runlevels 3 and 5, there should be symlinks in /etc/rc3.d and /etc/rc5.d pointing to /etc/init.d/postgresql (so that the boot sequence runs /etc/init.d/postgresql start for you --- so to speak). These symlinks should be named S??postgresql (where ?? is a two-digit code that indicates the order in which the service is started --- maybe 99 or some high number would be convenient, so that it is started after other services like networking). An ls -l /etc/rc?.d should help you visualize what you need to do. If you're not familiar at all with all this, do a search on runlevels and init scripts; I'm sure you'll find plenty of documents/tutorials out there. Or, the big brute-force hammer would be to add a line in the /etc/rc.local file with the very command that you're running to start it (without sudo, of course, since that's already being run by root). HTH, Carlos -- ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
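The symlink recipe above can be sketched as follows (the runlevels and the S99 sequence number are the assumptions discussed, not fixed requirements); Debian-based systems also ship update-rc.d, which picks those details for you:

```shell
# Manual route: start links for runlevels 3 and 5, started late (S99):
sudo ln -s ../init.d/postgresql /etc/rc3.d/S99postgresql
sudo ln -s ../init.d/postgresql /etc/rc5.d/S99postgresql
# Debian/Ubuntu helper that creates the links with default ordering:
sudo update-rc.d postgresql defaults
```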
[GENERAL] Restricting user access to an absolute minimum?
Hi, I'm trying to give access to *some* data to a third-party user. I want to create a view with a choice of columns and a reduced set of rows (via a where clause as part of the view definition), and give this user access to that *and only that* --- where the and only that goes in the most strict sense possible! The thing is, I create a test user; I log in as that user to the main database (with psql), I type \d and I see the list of tables. Perhaps much worse, I do \d some_table or \d vw_some_view and I see the definition for any arbitrary table or view. How can I avoid all that? That is, I would like the user to be able to do *exclusively* select * from vw_ and see the data that results from that query. I tried logging in as superuser and did: revoke all on schema public from new_user; revoke all on pg_tables from new_user; revoke all on pg_views from new_user; and another half dozen or so, but still the user is able to view whatever it wants to view (not the data, of course --- if I do select * from some_table then of course, I get access denied). There must be solutions to this, right? Any tips appreciated!! Thanks, Carlos -- ---(end of broadcast)--- TIP 6: explain analyze is your friend
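A minimal sketch of the lock-down being attempted here (user, schema, and view names are hypothetical). Note that hiding the *existence* and definitions of objects from \d is much harder, because psql reads the system catalogs, which every user can see; the grants below only restrict access to the data itself:

```shell
psql -d mydb <<'SQL'
-- start from nothing: no blanket rights on the schema for this user
REVOKE ALL ON SCHEMA public FROM new_user;
-- allow the user to reference objects in the schema at all...
GRANT USAGE ON SCHEMA public TO new_user;
-- ...but SELECT only on the one view
GRANT SELECT ON vw_some_view TO new_user;
SQL
```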
Re: [GENERAL] Encoding, Unicode, locales, etc.
Thanks Tom, for your reply. Tom Lane wrote: Carlos Moreno [EMAIL PROTECTED] writes: Why is it that the database cluster is restricted to a single locale (or single set of locales) instead of being configurable on a per-database basis? Because we depend on libc's locale support, which (on many platforms) isn't designed to switch between locales cheaply [...] This stuff is certainly far from ideal, but the amount of work involved to fix it is daunting; see many past pg-hackers discussions. Fair enough --- and good to know. 2) On the same token (more or less), I have a test database, for which I ran initdb without specifying encoding or locale; then, I create a database with UTF8 encoding. There's no such thing as you didn't specify a locale. If you didn't specify one on the initdb command line, then it was taken from the environment. Try show lc_collate and show lc_ctype to see what got used. Yes, that's what I meant --- I meant that I did not use the --locale or -E command-line switches for the initdb command. Both lc_ctype and lc_collate show en_US.UTF-8. I try lower of a string that contains characters with accents (e.g., Spanish or French characters), and it works as it should according to Spanish or French rules --- it returns a string with the same characters in lowercase, with the same accent. Why did that work? My Linux machine has all en_US.UTF-8 locales, and en_US is not even aware of characters with accents, You sure? I'd sort of expect a UTF8 locale to know this stuff anyway. In any case, Postgres doesn't know anything about case conversion beyond what toupper/tolower tell it, so your experimental result is sufficient proof that that locale includes these conversions. Are you sure there's nothing about the way PostgreSQL interacts with C conversion functions? I ask because, as part of a sanity check, I repeated the tests --- now with two machines; one that has PG 8.1.4, and the other one has 7.4.14, and they behave differently.
The one that does the case conversion correctly (read: as I expect it as per Spanish or French rules) is 8.1.4 with en_US locale (LC_CTYPE and LC_COLLATE both showing en_US.UTF-8). PG 7.4.14, *even with locale es_ES*, does not do the case conversion (characters with accent or tilde are left untouched). I wonder if someone could shed some light on this little mystery??? Perhaps to add more confusion to my experimental/informal tests, PG 8.1.4 is running on a FC4 AMD64 X2 box (the command locale at the shell prompt shows all en_US.utf8), and PG 7.4.14 is running on a laptop with FC5 on an Intel Celeron M (the command locale shows exactly the same in that case). Does this perhaps account for the difference? Thanks, Carlos
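A quick way to reproduce the experiment for a side-by-side comparison of the two servers (the database name is hypothetical):

```shell
# Show which ctype locale the server was initialized with, then see
# what its tolower() does to accented characters:
psql -d mydb -c "SHOW lc_ctype;"
psql -d mydb -c "SELECT lower('CANADÁ À É Ñ');"
# Under a *.UTF-8 locale the accents should survive ('canadá à é ñ');
# under plain C/POSIX the accented characters are typically left as-is.
```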
[GENERAL] Encoding, Unicode, locales, etc.
Hi, Even though I *think* I have a fairly clear understanding of encoding and locale principles, I'm somewhat unsure of how some of the tools available with PostgreSQL apply (or rather, how they should be used). 1) The way I understand it, encoding (character set) and locale are two different things. Yet, I see that initdb allows me to specify the default encoding (I can override it when creating the database) for the databases that I create later, and also the locale(s). Why is it that the database cluster is restricted to a single locale (or single set of locales) instead of being configurable on a per-database basis? 2) On the same token (more or less), I have a test database, for which I ran initdb without specifying encoding or locale; then, I create a database with UTF8 encoding. Then, from a psql console on the same Linux machine that is running the server, I try lower of a string that contains characters with accents (e.g., Spanish or French characters), and it works as it should according to Spanish or French rules --- it returns a string with the same characters in lowercase, with the same accent. Why did that work? My Linux machine has all en_US.UTF-8 locales, and en_US is not even aware of characters with accents, so it doesn't seem like it's taking by default the encoding from the OS. (Is it simply that the case is too obvious, so by default case conversion does the obvious thing?) I have several other details on which I'm not too clear, but perhaps with any responses or pointers that I might get for the above, it would clarify the whole confusion? BTW, I did read the online PG documentation --- the Localization section; the thing is, with everything that I read in there, my reaction was more or less "ok, I knew that"; that is, it states facts for which I know (or at least I think I know) the theory, but it did not clarify how to use the given tools.
Thanks, Carlos
Re: [GENERAL] Linux - postgres RAID
Rick Gigger wrote: I figure this would be a good place to ask. I want to build / buy a new linux postgres box. I was wondering if anyone on this list had some experience with this they'd like to share. I'm thinking somewhere in the $7k - 15k range. The most important things are write speed to the disk and good linux driver support for the raid card. Can anyone recommend a specific raid card / server vendor? Instead of (or in addition to) RAID configurations, you may want to consider using multiple disks, connected to independent channels (not a problem if you're talking SATA), such that PG can perform simultaneous access to the various filesystems. I'm not sure which parts are critical, or how many different partitions you would need for optimal performance, but the PG Performance mailing list archives should prove useful to find out about this. Maybe you could use SATA Raptor drives -- the 10k RPM ones, which I believe have a sustained transfer rate in the order of 80 or 90 MB/sec. Do make sure that you get a motherboard with a fast internal bus (a 533 or 800 MHz front-side bus at least), so that you don't waste your money on multiple independently-connected hard drives that hit a bottleneck when the data reaches the motherboard. And don't even bother to show up again if you were planning to put in less than 4GB of memory!! :-) HTH, Carlos
[GENERAL] E-mail harvesting on PG lists?
This is worrisome... I decided to create a separate account for my subscription to PG's mailing lists (to avoid all replies bouncing back due to my strict whitelist anti-spam filter) -- I created the account on Dec 22, and today I notice a phishing e-mail (Your PayPal account), meaning that it took less than two weeks for my e-mail address to go from PG's mailing list to a spammers' database of addresses... Needless to say, I have not used this e-mail address (really, truly, with 100% absolute certainty, not in any single instance) other than to post a couple of messages here. This is truly worrisome... I wonder if spammers today are basically subscribing to mailing lists so that they receive the e-mails (seems like a very obvious trick), or if they're moving to the next level of decrypting the encrypted / anti-spam form of e-mail addresses (the way they're displayed on the mailing list web site). Any comments? If it is the first option above, then it feels like by definition there is absolutely nothing that can be done, now or ever :-( Carlos -- ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Login username char length
Mark Constable wrote: But obviously there is no point in the lost time for the conversion to postgresql if ultimately postgresql has a similar limitation. Ultimately, PostgreSQL has sooo many non-limitations with respect to MySQL that you would never find that it was lost time -- I don't even know where to begin; MySQL being such a crappy toy and enjoying such a comfortable margin of popularity in the OS world, it makes me so angry and frustrated... My personal advice is: UPgrade and never look back; you'll never regret it! Carlos
[GENERAL] Why is create function bringing down the Backend server?
I'm trying to add additional functionality to the contrib/pgcrypto branch (at least for my own use, although ideally, I'd like to make whatever additions good enough to be accepted as part of the PG distribution). Anyway, I wanted to add hash functions (SHA-1 is already there, so I'd like to add SHA-256 and SHA-512 at the very least, and maybe also, for completeness, SHA-224 and SHA-384). Anyway, I started with an implementation of SHA-1 that I already have (since it is easy to test/debug, as I only have to compare it with the already-existing sha1 function in pgcrypto). I got it to work nicely, and I tried several million randomly-generated strings, and the result of my hash function matches the result of pgcrypto's sha1 function. The problem is, when I execute the SQL statement: create or replace function sha1 ; for the second time (i.e., after making modifications and recompiling), the *backend* crashes -- it then restarts automatically, and then I run the create or replace statement again, and it works now (and the function seems to work fine -- well, in its final version it does). I know the list of possible causes may be nearly infinite, so I put up the modified file (I removed most of the other stuff from the original pgcrypto.c file, and left the pg_digest function, which is the one that computes hashes, and the one that I used as a model to create mine): http://www.mochima.com/tmp/pgcrypto.c I also modified the SQL script file to include my function; this (when I execute this script) is precisely the moment at which the PG backend crashes (well, it shuts down anyway): http://www.mochima.com/tmp/pgcrypto.sql Any ideas of what I'm doing wrong? BTW, I compiled with the provided Makefile, then copied the .so files to the /usr/local/pgsql/lib directory, and ran /sbin/ldconfig (that directory is included in my /etc/ld.so.conf file). I'm running PG 7.4.9 on a Linux FC4 on a Dual-Core Athlon64 (kernel x86_64-smp). Thanks for any comments/feedback!
(Please be kind, as this is my first attempt ever at creating PG functions -- but please be tough! Don't hold back valuable feedback just because you don't want to hurt my baby feelings! :-)) Carlos
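For reference, the general shape of the statement that maps a SQL-level function onto a symbol in a shared object is sketched below; the library path and symbol name are placeholders, not pgcrypto's actual ones. One thing worth knowing when iterating on a C function: each backend keeps the .so mapped once it has been loaded, so after recompiling it is safest to start a fresh session before re-running the CREATE OR REPLACE.

```shell
psql -d test <<'SQL'
-- placeholder path and symbol name; adjust to the actual build
CREATE OR REPLACE FUNCTION my_sha1(bytea) RETURNS bytea
    AS '/usr/local/pgsql/lib/pgcrypto', 'my_sha1_wrapper'
    LANGUAGE C IMMUTABLE STRICT;
SQL
```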
Re: [GENERAL] Why is create function bringing down the Backend server?
Marko Kreen wrote: On 12/22/05, Carlos Moreno [EMAIL PROTECTED] wrote: The problem is, when I execute the SQL statement: create or replace function sha1 ; for the second time (i.e., after making modifications and recompiling), the *backend* crashes -- it then restarts automatically, and then I run again the create or replace statement, and it works now (and the function seems to work fine -- well, in its final version it does). You should see if there's something in the server log. The only thing that does show up does not seem to say much: LOG: server process (PID 12885) was terminated by signal 11 LOG: terminating any other active server processes LOG: all server processes terminated; reinitializing Not sure what the meaning of signal 11 is with PG (that would be SIGSEGV --- a segmentation fault --- right?) And then indeed, try to gdb it. You can run Postgres in non-daemon mode with the command 'postgres -D datadir database'. The stripped pgcrypto.c you posted - your wrapper function looks fine; the only problem I see is that you deleted the function find_provider that is used by pg_digest, so there will be an undefined function in the final .so. Oh no!! That was only in the function I posted, so that the file is kept as short as possible -- in the one that I compiled, I left everything untouched, and only added my functions. But that should not crash the server, so a gdb trace could still be useful. Ok, will try to do it and post any interesting discoveries (I can't find any core files, so I guess I'll have to try gdbing it) Anyway, I wanted to add hash functions (SHA-1 is already there, so I'd like to add SHA-256 and SHA-512 at the very least, and maybe also, for completeness, SHA-224 and SHA-384). For SHA2 hashes it should be enough to compile pgcrypto against OpenSSL 0.9.8. Or upgrade to PostgreSQL 8.1, where they are included. Of course, that is no fun. Hahahaha -- why do I keep being naive and making the same mistake over and over!!!
:-) As much as it is indeed no fun, it is also good to know (and I didn't know that OpenSSL 0.9.8 had them either, so thanks for the double pointer!) If you want to hack, you could try adding SHA224 to the SHA2 implementation in 8.1. Sounds like a plan :-) Thanks, Carlos
Re: [GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1
[...] This makes sense to me, yes. The reason why I'm a bit lost is that we never did anything whatsoever with respect to encoding. Oddly enough, I couldn't find much about this in the docs. [...] I guess what changed from version 7.4.x to 8.0 is that the default server_encoding changed? Some further digging through the documentation reveals (in addition to the fact that I seem to be incompetent when it comes to searching through documentation :-)) that the above is indeed the case. Each database apparently has its own encoding, which can be specified with either the createdb shell command (switch -E or --encoding, IIRC), or with the SQL command create database. With version 7.4.x, when I execute the command: show server_encoding; I get SQL_ASCII as the response. Also, when I do: select * from pg_database; I get a bunch of columns where the encoding field contains 0 for all of them (I never specified an encoding when creating databases, so in all those cases it took the default). With 8.0.1, the command show server_encoding returns UNICODE as the result -- this is also the default, since I created the database without explicitly indicating an encoding. Apparently, conversion from latin1 to UNICODE works? (Or at least doesn't trigger an error -- I wonder if some characters appear now incorrectly? I'll check that.) Yes, I know I'm blah-blah-ing and answering my own questions :-) I'm just posting the progress I've made in case someone else was also interested in finding out more about the problem or possible solutions. Thanks, Carlos -- ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
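The raw integer in pg_database can be decoded directly, which makes the before/after comparison clearer (0 decodes to SQL_ASCII, matching the 7.4.x behaviour described):

```shell
# List each database with its encoding by name rather than as a number:
psql -c "SELECT datname, pg_encoding_to_char(encoding) FROM pg_database;"
```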
Re: [GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1
Thanks again, Alvaro! Alvaro Herrera wrote: So, our system (CGI's written in C++ running on a Linux server) simply takes whatever the user gives (properly validated and escaped) and throws it in the database. We've never encountered any problem (well, or perhaps it's the opposite? Perhaps we've always been living with the problem without realizing it?) The latter, I think. The problem is character recoding. If your old system has been running with encoding SQL_ASCII, then no recoding ever takes place. If you are now using UTF8 or latin1 (say) as server encoding, then as soon as the client is using a different encoding, there should be conversion in order to make the new data correct w.r.t. the server encoding. If the wrong conversion takes place, or if no conversion takes place, you may either end up with invalid data, or have the server reject your input (as in this case). This makes sense to me, yes. The reason why I'm a bit lost is that we never did anything whatsoever with respect to encoding. Oddly enough, I couldn't find much about this in the docs. I see references to it in the runtime configuration docs (the part where they describe the postgresql.conf file). There's one line, commented out, where they set (as an example) the client_encoding to sql_ascii, and a comment at the end of that line says it actually defaults to the server encoding. I just found out that in the create database statement, one of the options specifies the encoding used for that database. I guess what changed from version 7.4.x to 8.0 is that the default server_encoding changed? This means that a temporary solution (or rather, a temporary patch) would be to create the database specifying the right server_encoding to match what I currently have on my system?
(I wouldn't want to do that if it is nothing more than a patch to keep hiding the problem) So the moral of the story seems to be that yes, you need to make each application issue the correct client_encoding before entering any data. You can attach it to the user or database, by issuing ALTER USER (resp. DATABASE). But if you are using a web interface, where the user can enter data in either win1252 or latin1 encoding (or whatever) depending on the environment, then I'm not sure what you should do. This is indeed the case; and do nothing is what we have always done with respect to this issue... Why it has been so long without us realizing that there was a hidden problem, I really don't know. (and we do have users with plenty of weird characters -- accent aigu, grave, circumflex, tilde, dieresis, etc. -- and they have always worked). I'm so lost! :-( BTW, the correct e-mail to pass through the anti-spam filter is my first name, followed by a dot, followed by my last name (the rest after the @ is the same) Thanks again for your message! Carlos -- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
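The ALTER USER / ALTER DATABASE route mentioned above looks like this (the role name, database name, and encoding are stand-ins for whatever matches the actual clients):

```shell
# Make every session of a role, or every connection to a database,
# default to a given client_encoding --- no application changes needed:
psql -c "ALTER USER webapp SET client_encoding TO 'LATIN1';"
psql -c "ALTER DATABASE mydb SET client_encoding TO 'LATIN1';"
```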
Re: [GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1
Hi Alvaro, thanks for your reply! Alvaro Herrera wrote: psql:db_backup.sql:1548: ERROR: invalid byte sequence for encoding UNICODE: 0xe12020 CONTEXT: COPY country, line 5, column namespanish: Canad? Hmm. The sequence looks like latin1 interpreted as utf8. This seems the inverse of the problem reported (and solved) here http://archives.postgresql.org/pgsql-es-ayuda/2005-03/msg00491.php Maybe you should try sticking a SET client_encoding TO latin1; at the beginning of the dump file. One thing worries me, though. With all of the previous versions of postgresql (I think when we started to use it in our system, it was version 7.1), I have never worried about any encoding issues. Our users are mostly Spanish-speaking, and they register to our system via web-based interfaces; virtually 100% of them use Windows (and perhaps most of them Windows in Spanish, with a Spanish keyboard). So, our system (CGI's written in C++ running on a Linux server) simply takes whatever the user gives (properly validated and escaped) and throws it in the database. We've never encountered any problem (well, or perhaps it's the opposite? Perhaps we've always been living with the problem without realizing it?) I worry now that if I needed to put a set client_encoding statement to make the insert or COPY statements work, does that mean that I should modify each and every program that I have that interacts with the database, and add a set client_encoding statement before whatever other statement(s) we execute? Or is this client_encoding setting something that gets attached to the database (or the tables) itself? Where can I find more documentation on these issues? I'd like to get a deeper understanding, to avoid any future problems. Why are you using CHAR(n) fields anyway? It would probably be better to use VARCHAR(n) ...
One of those things that happen even in the best of families ;-) (I was also surprised when noticing the bunch of spaces at the end -- I would have thought that we were using varchars in fields like that one) Thanks again! Cheers, Carlos -- PS: I have a strict white-list anti-spam filter in place, which is why a direct e-mail would be rejected -- let me know if you want to write directly through e-mail, so that I can add you to the white list file. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Encoding-related errors when moving from 7.3 to 8.0.1
Hello, I'm trying to upgrade a system that is currently running version 7.4.5 to the new 8.0.1. I create a backup using pg_dump, and I expect it to work when restoring it to 8.0.1. However, when I run: psql -U user -f backup.sql whenever there is a field value that contains characters with accents (e.g., HTML &aacute;, which would be the Alt-160 character when using the numeric keypad on Windows), I get an error about invalid UNICODE characters in the COPY statements. The error reads like: psql:db_backup.sql:1548: ERROR: invalid byte sequence for encoding UNICODE: 0xe12020 CONTEXT: COPY country, line 5, column namespanish: Canad? (that ? should be an a with a ' on top -- in HTML, it would be the &aacute; character) I get this error with or without the line client_encoding = SQL_ASCII at the beginning of the pg_dump'ed file (I noticed it and removed it to see if that would fix the problem -- it didn't change the behaviour). I know this feels like it could be the dumbest question ever asked around here :-) But I can't figure out why it's happening and how to fix it (I mean, it's a backup created by pg_dump -- it should be compatible with a psql restore, even if it is a different version). Any ideas? Thanks, Carlos --
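If editing the dump's client_encoding is not convenient, the dump itself can be converted to UTF-8 before restoring. A minimal sketch, assuming the dump really is LATIN1 and that iconv is available (the backup.sql name is taken from the command above):

```shell
# 0xE1 is 'á' in LATIN1; iconv turns it into the two-byte UTF-8 sequence
# that the 8.0.1 server's UNICODE encoding expects.
printf 'Canad\341\n' | iconv -f LATIN1 -t UTF-8

# Converting a whole dump works the same way:
#   iconv -f LATIN1 -t UTF-8 backup.sql > backup.utf8.sql
#   psql -U user -f backup.utf8.sql
```

This avoids touching every client program: only the dump is recoded once.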
Re: [GENERAL] Checking whether postgresql is running
Ennio-Sr wrote: [Possible duplicate: original sent to novice never got through! -;(] Hi all! Testing a script where I need to make sure that postgresql is running before passing a psql dbasename -c "insert into ..." instruction, I faced this curious behaviour. This is the relevant content of the script: #!/bin/bash /usr/lib/postgresql/bin/pg_ctl status -D /var/lib/postgres/data > /dev/null 2>&1 rtn=$? if [ $rtn -ne 0 ]; then echo "not running"; else echo "ok ok"; fi Now, if I run the script as root, I get: ok ok (or, commenting out the script's if condition: pg_ctl: postmaster is running (pid: 18658) Command line was: /usr/lib/postgresql/bin/postmaster), whereas, if I run the same script as an ordinary user, the answer is: not running (or, commenting out the if lines: pg_ctl: postmaster or postgres is not running). -- Everything is being tested on the same PC [running PG 7.2.1-2Woody5 under GNU/Linux, k. 2.2.22], root being on /dev/tty1, user on /dev/tty2, and postgresql not being stopped while switching from root to user :-). Could anybody throw some light on this issue? Though I don't read Perl at all, so I haven't the slightest idea about what's in the if, I think I can shed some light on the issue... When you execute pg_ctl status -D /var/lib/postgres/data, the command fails -- you cannot access the directory /var/lib/postgres/data to go and pick up the file postmaster.pid (I think that's the filename) to check whether that PID is running. So, without understanding that gibberish inside the if, I bet that it simply returns an error code that causes the "not running" branch of the if to be executed. You have to be user postgres or the superuser to be able to use pg_ctl to verify whether the postmaster is running. A loose check would be executing the command ps -C postgres and seeing if there's any output. Or simply use the PG client library -- you will get an error message saying that the postmaster is not running on the specified port.
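A permission-tolerant version of that script might look like this sketch; the paths come from the original script, while the ps fallback is an assumption (pg_ctl status only works for a user who can read the data directory):

```shell
#!/bin/bash
# Sketch: check whether the postmaster is running, degrading gracefully
# when the current user cannot read the data directory.
pg_running() {
    local pgctl=/usr/lib/postgresql/bin/pg_ctl
    local pgdata=/var/lib/postgres/data
    if [ -x "$pgctl" ] && [ -r "$pgdata" ]; then
        # Privileged path: pg_ctl status exits 0 iff the postmaster is up.
        "$pgctl" status -D "$pgdata" > /dev/null 2>&1
    else
        # Ordinary-user fallback: the loose process check mentioned above.
        ps -C postmaster > /dev/null 2>&1 || ps -C postgres > /dev/null 2>&1
    fi
}

if pg_running; then echo "ok"; else echo "not running"; fi
```

Note the fallback can be fooled by a postmaster belonging to a different cluster, which is why connecting via the client library remains the most reliable check.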
Carlos --
[GENERAL] How is this possible? (more on deadlocks)
Ok, now I'm really intrigued by what looks to me (possibly from a naive point of view) like a bug, or rather, a limitation of the implementation. I can't find a reasonable justification for why the following causes a deadlock: I run two instances of psql using the same DB on the same machine. On one of them, I run: create table A (id int primary key); create table B (id int primary key); create table AB ( A_id int references A(id), B_id int references B(id) ); Then I add a few records (all this from the same instance of psql): insert into A values (1); insert into A values (2); insert into B values (10); insert into B values (11); Ok, now, I try two concurrent transactions, by executing commands alternating from one psql instance to the other. I'll prefix each line with 1: or 2: indicating which console I execute it on -- the commands were executed in the time sequence corresponding to the lines below: 1: begin; 2: begin; 1: insert into AB values (1,10); 2: insert into AB values (2,10); AT THIS POINT, CONSOLE 2 BLOCKS 1: insert into AB values (2,11); At this point, console 1 blocks for a second or two, and then I get an error reporting that a deadlock was detected; then, console 2 unblocks. I can't see how it is justified that the above causes a deadlock. I do understand how the deadlock is happening: trans. 1 puts a lock on row 1 of A and row 10 of B -- meaning, nobody touches these rows until it's finished; then trans. 2 locks row 2 of A, but is put on hold waiting to lock row 10 of B, since there is already a lock on it. When trans. 1 now tries to put a lock on row 2 of A, the deadlock happens. The thing is, why? Why is this a deadlock? Looking at the low-level details, sure; but when you look at the nature of what's happening at a conceptual level, a deadlock is not justified, IMHO: Trans.
1 doesn't really need to put a mutex-type lock around row 1 of A -- it simply needs to atomically flag the order "nobody deletes or modifies this row of table A"... Another transaction that attempts to place the same order should not block -- it should succeed, return immediately, and continue with the transaction; there is no conflict in the above example -- the first transaction does not want to allow anyone to mess with row 1 of A; the other transaction wants exactly the same, so it seems to me that the lock is more restrictive than it needs to be. I don't know about the internals of how transactions, locks, and FK constraints are handled, but I'm analyzing it and describing what seems to be happening internally, based on the behaviour I observe. Any comments? Carlos --
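The "flag the row, don't mutex it" behaviour described above is in fact what later PostgreSQL releases implement: starting with 8.1, the FK check takes a shared row lock on the referenced row rather than an exclusive one, so this exact scenario no longer deadlocks. A sketch of the difference, using the tables from the example (what the FK trigger effectively runs under the hood):

```sql
-- 7.x/8.0 behaviour: the FK check takes an exclusive row lock, so two
-- transactions referencing the same B row collide.
SELECT 1 FROM B WHERE id = 10 FOR UPDATE;

-- 8.1+ behaviour: a shared lock, which any number of referencing
-- transactions can hold at once; it still blocks anyone trying to
-- DELETE or UPDATE the referenced row.
SELECT 1 FROM B WHERE id = 10 FOR SHARE;
```

So the observation is correct that the exclusive lock is more restrictive than the FK semantics require; the shared lock is precisely the weaker "nobody delete or modify this row" flag.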
[GENERAL] Is this legal SQL? Is it a good practice?
I just noticed that from a C or C++ program using libpq or libpq++, I can send *one* command that contains several SQL statements separated by semicolons. Something like: PgDatabase db ( ); const char * const sql = "insert into blah (...); insert into blah (...);"; if (db.Exec (sql) == PGRES_COMMAND_OK) { cout << "Fine!" << endl; } And I verify the table, and all the inserts took place (and of course, the program outputs "Fine!"). But I'm wondering -- is this a PostgreSQL extension, or is it legal SQL? In particular, I'm wondering if it is a feature that in the future you might decide to eliminate for not being ANSI-SQL compliant. What happens if the first command is ok but the second one fails? I guess PgDatabase::Exec would return an error code, and PgDatabase::ErrorMessage would return the error message corresponding to the second statement (the one that failed). Am I correct in thinking this? Any reason why this should be avoided? (On the plus side, I think this might increase efficiency for transactions where one executes several insert or update statements.) Thanks for any comments, Carlos --
Re: [GENERAL] Is this legal SQL? Is it a good practice?
Peter Eisentraut wrote: I just noticed that from a C or C++ program using libpq or libpq++, I can send *one* command that contains several SQL statements separated by semicolons. But I'm wondering -- is this a PostgreSQL extension, or is it legal SQL? The whole libpq API is made up out of thin air, so it's not conforming to any public standard. Oh, wait. Though you didn't say it explicitly, I guess you're implying that it is libpq that splits the string into the individual SQL statements and then sends each of those to the backend? (wrapped in a transaction?) In particular, I'm wondering if it is a feature that in the future you might decide to eliminate for not being ANSI-SQL compliant. Because of the above, that cannot be a reason for eliminating any interfaces. Well, I was under the impression that the backend would receive *one* command with *one* string that contains all the SQLs separated by semicolons. This is what I thought might be removed in future releases, if it is considered an unnecessary extension, etc. Thanks, Carlos --
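For the record, with libpq's PQexec (which the libpq++ Exec call wraps), the semicolon-separated string is sent to the backend as a single simple-query message, and unless the string itself contains an explicit BEGIN/COMMIT, all of its statements run in one implicit transaction: if a later statement fails, the earlier ones are rolled back too. A sketch of the observable behaviour, reusing the hypothetical blah table from the question:

```sql
-- Sent as ONE Exec() string:
insert into blah values (1);
insert into blah values (bogus_identifier);  -- this statement fails ...

-- ... and because both statements ran in the same implicit transaction,
-- the first insert is rolled back as well: a subsequent
--   select count(*) from blah;
-- would not see the row (1).
```

So Exec() does report the error of the failing statement, but the "first command ok, second fails" case leaves no partial work behind.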
[GENERAL] Odd behaviour -- Index scan vs. seq. scan
I can't find a reasonable explanation for this. I have a table game, with primary key gameid (an int). If I use a WHERE involving gameid and <, or >, or <=, or >=, then I get a sequential scan. If I use =, then of course I get an index scan. More surprising to me is the fact that using BETWEEN, I get an index scan!! The following is a copy-n-paste of the EXPLAIN outputs (with the useless lines removed): explain delete from game where gameid = 1000; Index Scan using game_pkey on game (cost=0.00..3.14 rows=1 width=6) explain delete from game where gameid < 1000; Seq Scan on game (cost=0.00..4779.50 rows=200420 width=6) explain delete from game where gameid between 1000 and 2000; Index Scan using game_pkey on game (cost=0.00..3.15 rows=1 width=6) How's that possible? Is it purposely done like this, or is it a bug? (BTW, Postgres version is 7.2.3) Thanks, Carlos --
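The row estimates in the EXPLAIN output above hint at the answer: for a one-sided range predicate the planner guesses that a large fraction of the table qualifies (200420 rows here), and for that many rows a sequential scan genuinely is cheaper than an index scan, while BETWEEN combines two bounds into a much smaller estimate (rows=1). A sketch of how to investigate, assuming a psql session on the same database (disabling seq scans is a diagnostic trick only, not a production setting):

```sql
-- Refresh the planner's statistics first, so the estimates are honest:
ANALYZE game;

EXPLAIN DELETE FROM game WHERE gameid < 1000;                -- large estimate
EXPLAIN DELETE FROM game WHERE gameid BETWEEN 1000 AND 2000; -- small estimate

-- Temporarily forbid seq scans to see what the index plan would cost:
SET enable_seqscan TO off;
EXPLAIN DELETE FROM game WHERE gameid < 1000;
SET enable_seqscan TO on;
```

If the forced index plan's cost comes out higher than the seq-scan cost, the planner's choice was deliberate, not a bug.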
[GENERAL] Re: I want more Money (the data type, of course! :-))
Stephan Szabo wrote: Use numeric with appropriate precision information. I'm assuming that when I use numeric, specifying the number of decimals, there is no rounding error in the arithmetic and storage? (Well, other than rounding of the decimals beyond the ones specified -- e.g., if I say numeric with 2 decimals, there will be rounding error only in the 3rd decimal and after.) If so, then it sounds better than using an 8-byte integer to keep the pennies, given that it is more a what-you-get-is-what-you-get thing than storing the pennies, which is really a what-you-get-is-not-what-you-get-until-you-divide-it-by-100 ;-) Thanks! Carlos --
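Stephan's suggestion can be sketched like this (the account table and its column names are hypothetical); NUMERIC arithmetic is exact decimal arithmetic, and rounding only happens when a value is forced back into the declared scale:

```sql
-- A hypothetical accounts table using NUMERIC for money:
CREATE TABLE account (id int PRIMARY KEY, balance numeric(14,2));

-- Far beyond MONEY's roughly +/- 21 million range:
INSERT INTO account VALUES (1, 123456789012.34);

-- The addition itself is exact; rounding to the declared 2 decimals
-- occurs only when the result is stored back into the column:
UPDATE account SET balance = balance + 0.005 WHERE id = 1;
```

This matches the intuition above: within the declared scale the values are exact, and any rounding is confined to digits beyond it.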
[GENERAL] I want more Money (the data type, of course! :-))
No, it's not spam! :-) I'm just wondering if there is a data type like Money, but with a (much much much) higher range -- checking the documentation, it looks like PG uses an int (32 bits) to store the amount of cents -- but -21 million to +21 million is insufficient for the accounting of a small company... :-( Float is out of the question, of course (if it is really stored as a float or double). Any suggestions? Thanks, Carlos --