[ADMIN] SELECT COUNT(*)... returns 0 ROWS
I have installed Postgres 7.3.4 on RH 9. If I execute:

    select count(*) from cj_tranh;
     count
    -------
         0
    (1 row)

Why is the result ZERO? The table has 1,400,000 rows! What is wrong? Can anybody help, please?
Re: [ADMIN] SELECT COUNT(*)... returns 0 ROWS
Thanks to all -- I was in the WRONG database. Sorry, and thanks!

-----Original Message-----
From: Tom Lane <[EMAIL PROTECTED]>
To: Jeff <[EMAIL PROTECTED]>
Cc: "PostgreSQL" <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
Date: Fri, 31 Oct 2003 16:27:31 -0500
Subject: Re: [ADMIN] SELECT COUNT(*)... returns 0 ROWS

> Jeff <[EMAIL PROTECTED]> writes:
> > "PostgreSQL" <[EMAIL PROTECTED]> wrote:
> >> Why is the result ZERO? The table has 1,400,000 rows!
>
> > 1. did you remember to load data?
> > 2. did someone accidentally delete the data?
> > 3. are you connected to the correct db (I've panic'd before but realized
> >    I was on dev, not production!)?
> > 4. sure that is the right table?
>
> I'm wondering about MVCC-related conditions, which adds a couple more questions:
>
> 5. Did you actually commit the transaction that loaded the rows?
> 6. Are you doing the SELECT COUNT(*) in a transaction that started
>    before the data-loading transaction committed?
>
>             regards, tom lane
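A quick way to run through Jeff's and Tom's checklist from psql (a minimal sketch; the table name comes from the original post, everything else is generic):

    -- confirm which database and user this session is actually connected to
    SELECT current_database(), current_user;
    -- list the tables psql can see
    \dt
    -- then re-run the count
    SELECT count(*) FROM cj_tranh;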
[ADMIN] "too many clients" and no cure
Hi,

I am experiencing some trouble with my PG server: I get the error message "too many clients". I checked the manual and adjusted max_connections and shared_buffers, but the server still denies access once there are about 100 connections. I just recently increased the shared buffers from 1 to 5 and the system shared memory from 128m to about 800m. Still the error occurs at 100 active connections.

Here's some system info: OS is SuSE Linux 9.1 (kernel 2.6.7-040722), 1GB of RAM, PG is 7.4.2.

Parameters in postgresql.conf:
    max_connections = 500
    shared_buffers = 5
    sort_mem = 2048

sysctl -a | grep shm
    kernel.shmmni = 134217728
    kernel.shmall = 805306368
    kernel.shmmax = 805306368

ipcs -ls -- Semaphore Limits
    max number of arrays = 128
    max semaphores per array = 250
    max semaphores system wide = 32000
    max ops per semop call = 100
    semaphore max value = 32767

Thanks, Tom
Re: [ADMIN] "too many clients" and no cure
> It sounds like you forgot to restart the postmaster after changing
> postgresql.conf. This is one of the parameters that is frozen at server
> start ...

No, I did restart the postmaster, actually more than once. I do not have to restart the server after changing kernel parameters with sysctl, do I? Do you have any other ideas? I have been having trouble for a week or so ...
Re: [ADMIN] "too many clients" and no cure
> In that case, maybe you edited the wrong copy of postgresql.conf?
> Try checking "SHOW max_connections" to verify what the postmaster
> thinks the value is.

Hmm, it said it was still 100. Even though postgresql.conf stayed unchanged, after yet another restart it works. Funny.

> I don't think kernel limits would lead to "too many clients" --- you'd
> be getting different sorts of failures if the problem were in that area.

Thanks a lot!
Tom
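For anyone hitting the same thing, the checks Tom Lane suggests look roughly like this in psql (a sketch; SHOW config_file only exists in 8.0 and later, on 7.4 the file in the -D data directory is the one that counts, and pg_stat_activity needs the stats collector enabled):

    SHOW max_connections;                     -- what the running postmaster actually uses
    SHOW config_file;                         -- which postgresql.conf was loaded (8.0+)
    SELECT count(*) FROM pg_stat_activity;    -- how many backends are currently connected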
[ADMIN] Row Lock
I'm having a problem with some row locks: one user takes the lock and another user tries to do the same, but of course the second one has to wait until the first user releases it. I would like to know if there is a way to discover whether that row is locked before I try to take the lock. I have been researching something like ROWID but I couldn't find anything useful. Thanks
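In later PostgreSQL releases there are row-lock variants that avoid waiting, which is usually what this question is really after (a sketch; the table and key are invented, and these features may postdate the poster's version):

    BEGIN;
    -- errors out immediately instead of blocking if another session holds the row lock (8.1+)
    SELECT * FROM accounts WHERE id = 42 FOR UPDATE NOWAIT;
    -- ... do the work ...
    COMMIT;

    -- on 9.5 and later, locked rows can simply be skipped instead:
    SELECT * FROM accounts WHERE id = 42 FOR UPDATE SKIP LOCKED;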
[ADMIN] Query results erratic on Linux
Hi,

My application runs fine on Windows. However, when I copy the files to the Linux server, some queries return no results or errors even though the records and tables exist! It is always the same records or tables that are not found. In the case of a table the error is:

    function.pg-query: Query failed: ERROR: relation "sites" does not exist

Any idea what might cause the problem?

The server configuration: PHP Version 5.0.4, PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

Thanks
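A common cause of exactly this symptom is identifier case folding rather than missing data (a guess, not a confirmed diagnosis): unquoted names are folded to lower case, so a table created with quotes as "Sites" will not be found by a query for sites. A quick check from psql:

    -- see how the table name is actually stored
    SELECT schemaname, tablename FROM pg_tables WHERE lower(tablename) = 'sites';
    -- if it was created with a capital letter, the query must quote it the same way:
    SELECT * FROM "Sites" LIMIT 1;
    -- and make sure the table's schema is on the search path:
    SHOW search_path;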
Re: [ADMIN] PgAccess on OS X--was "tcl wish--simple configure
Russ, are you using the 8.4a version? I got this from ADC news.

---
Apple is pleased to announce a native port of Tk version 8.4a4 for Mac OS X 10.1. Tk is a rapid application GUI toolkit used by Tcl, Perl, and Python. The Tk release allows script developers to run existing GUI applications with a native Aqua look and feel directly on Mac OS X v10.1.

As part of Apple's ongoing commitment to Open Source, this port has been released under a BSD-style license. Apple's changes have been and will continue to be submitted to the main Tcl/Tk CVS repository at tcl.sourceforge.net. This initial port only supports Tk for use with Tcl; we are working on integrating this into the main branch of Tk, and from there we expect it to be integrated into the versions used with Perl and Python. If you'd like to help, please join the discussions on darwin-[EMAIL PROTECTED]

Sincerely,
Ernest Prabhakar
Open Source Product Manager
[EMAIL PROTECTED]
---

-----Original Message-----
From: Russ McBride <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED], Peter Eisentraut <[EMAIL PROTECTED]>
Date: Wed, 17 Oct 2001 17:36:54 -0700
Subject: [ADMIN] PgAccess on OS X--was "tcl wish--simple configure problem"

> Thanks Peter, that got me a little further.
>
> Anyone out there gotten pgAccess working on Mac OS X? I haven't been
> able to make my way through a configure using the new 8.4 Tk
> snapshots that are up on the sourceforge site. I'm getting the
> following error:
>     configure: error: file 'tkConfig.sh' is required for Tk
>
> Russ
>
> > Russ McBride writes:
> > > What is the environment variable that PG looks for when it searches
> > > for wish?
> >
> > WISH
> >
> > --
> > Peter Eisentraut  [EMAIL PROTECTED]  http://funkturm.homeip.net/~peter
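For the tkConfig.sh error Russ hits, the configure script of that era could usually be pointed at the directories holding tclConfig.sh and tkConfig.sh explicitly (a sketch under the assumption that the Tk 8.4 snapshot installed its files under /usr/local/lib; adjust the paths to wherever they actually landed):

    export WISH=/usr/local/bin/wish8.4
    ./configure --with-tcl --with-tclconfig=/usr/local/lib --with-tkconfig=/usr/local/lib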
Re: [ADMIN] Permission Denied When i am Trying to take Backup
Could someone create a post that shows who (which user) should own what? I have always let postgres own the pgsql directory, and I see that it is recommended that root own it.

Thanks, Ted

-----Original Message-----
From: Peter Eisentraut <[EMAIL PROTECTED]>
To: Stefan Huber <[EMAIL PROTECTED]>
Date: Thu, 11 Oct 2001 22:03:28 +0200 (CEST)
Subject: Re: [ADMIN] Permission Denied When i am Trying to take Backup

> Stefan Huber writes:
>
> > If you followed the installation guide step by step, the postgres
> > files/directories are owned by root, not by postgres.
>
> Which is a good idea.
>
> > I always do a chown -R postgres:daemon /usr/local/pgsql (or
> > postgres:postgres) after installation.
>
> Which is a bad idea.
>
> The installation instructions were developed with some thought behind them.
>
> --
> Peter Eisentraut  [EMAIL PROTECTED]  http://funkturm.homeip.net/~peter
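To answer Ted's "who should own what" in shorthand (a sketch of the conventional layout; the /usr/local/pgsql prefix is just the default from the installation guide): the installed binaries and libraries can stay owned by root, while the data directory must belong to the unprivileged postgres user and be readable by nobody else.

    ls -ld /usr/local/pgsql /usr/local/pgsql/bin      # owned by root is fine
    chown -R postgres /usr/local/pgsql/data           # the cluster itself belongs to postgres
    chmod 700 /usr/local/pgsql/data                   # recent releases refuse to start otherwise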
Re: [ADMIN] Database access error after upgrade 7.1.2 -> 7.1.3
What happens when you update from the data dump (pg_dumpall) that you made to back up your data? You did do a pg_dumpall before you did the update? I don't think it is required to upgrade from 7.1.2 to 7.1.3.

What happens when you run version 7.1.2 and access the data?

    /home2/pgsql /home2/pgsql-7.1.2/postmaster -D /home2/pgsql/data -i &

This should run your old version of postgresql; then you could run pg_dump for the database

    /home2/pgsql /home2/pgsql-7.1.2/pg_dump **database here** > backup.out

and use this to populate your new setup. I see from your startup command that you are not saving the output of postgres to a file, so there is no way to look at the log.

Ted

-----Original Message-----
From: Steve Frampton <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Date: Wed, 10 Oct 2001 04:42:19 -0400 (EDT)
Subject: [ADMIN] Database access error after upgrade 7.1.2 -> 7.1.3

> Hello:
>
> I just moved a database server from 7.1.2 to 7.1.3. After looking in the
> ChangeLog and not seeing anything that would seem to indicate any DB
> structural changes, I did the upgrade as follows:
>
>     pg_ctl stop -D /home2/pgsql/data -w -m smart
>     mv /home2/pgsql /home2/pgsql-7.1.2
>     mkdir /home2/pgsql && cd /home2/pgsql && cp -R /home2/pgsql/data &&
>     chown postgres:postgres data
>
>     (Built and installed 7.1.3)
>
>     /home2/pgsql/bin/postmaster -D /home2/pgsql/data -i &
>
> No errors were reported, and I can access my user created databases.
> However, if I try to connect to template0, I get the following error
> message:
>
>     psql: FATAL 1: Database "template0" is not currently accepting connections
>
> (Again, user created databases, as well as template1, can be accessed
> normally.)
>
> Is there a solution for this? If not, my boss is wondering if suicide is
> a viable alternative.
>
> Thanks...
>
> ---< LINUX: The choice of a GNU generation. >---
> Steve Frampton <[EMAIL PROTECTED]> http://www.LinuxNinja.com
> GNU Privacy Guard ID: D055EBC5 (see http://www.gnupg.org for details)
> GNU-PG Fingerprint: EEFB F03D 29B6 07E8 AF73 EF6A 9A72 F1F5 D055 EBC5
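Spelling out the fallback Ted describes, for the record (a sketch; the paths follow the thread, the database name is a placeholder, and the exact location of the old binaries depends on how 7.1.2 was installed):

    # start the 7.1.2 binaries against the copy of the data directory that was set aside,
    # logging to a file this time
    /home2/pgsql-7.1.2/bin/postmaster -D /home2/pgsql-7.1.2/data -i > /tmp/pg712.log 2>&1 &
    # dump with the matching pg_dump, then load into the freshly built 7.1.3 server
    /home2/pgsql-7.1.2/bin/pg_dump mydb > mydb.out
    # (create mydb in the new cluster first if it does not already exist)
    /home2/pgsql/bin/psql -d mydb -f mydb.out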
Re: [ADMIN] seq scan on indexed column
On Thu, 14 Mar 2002, Zhang, Anna wrote:

> gtld_analysis=# explain SELECT NETBLOCK_START
> gtld_analysis-# FROM GTLD_OWNER
> gtld_analysis-# WHERE NETBLOCK_START = -2147483648;

You might want to try the same query but with the constant integer enclosed in single quotes. I find that (at least for int8) this changes the behaviour with respect to index usage, most probably due to automatic typecasting in the PostgreSQL SQL parser.

Hope this helps...
Tycho

--
Tycho Fruru  [EMAIL PROTECTED]
Users' impressions of different operating systems, expressed as emoticons:
Linux: :)   Windows: XP
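For concreteness, the variant Tycho suggests looks like this (same table and column as the quoted query; whether the planner then picks the index still depends on the pre-8.0 typecasting rules and the statistics):

    EXPLAIN SELECT netblock_start
    FROM gtld_owner
    WHERE netblock_start = '-2147483648';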
Re: [ADMIN] connections
On Fri, 5 Apr 2002, jaya wrote:

> Hi, The documentation on MAX_CONNECTIONS mentions a default of 32
> connections and a compiled-in hard upper limit of 1024 connections. It
> also mentions that it can be altered when compiling the server.
>
> Does this mean that the upper limit can be increased to more than 1024
> at the time of server startup?

I guess so, if the server (and the libraries which the server uses) have been configured and compiled to support more than 1024 concurrent connections. Why would you want more than 1024 simultaneous connections to the same database instance? (Perhaps some connection pooling could help reduce your requirements.)

> What will be the typical number of concurrent users if
> MAX_CONNECTIONS is kept at 32?

I'm not sure what you mean, but I'd suppose it would be between 0 and 32...

Cheers
Tycho

--
Tycho Fruru  [EMAIL PROTECTED]
Users' impressions of different operating systems, expressed as emoticons:
Linux: :)   Windows: XP
[ADMIN] 7.2.x on osx
I am able to compile and install 7.3 (b1 and b2) on OS X 10.2.1 without problems. However, I cannot get 7.2.x to compile. Is there a fix?

Ted
[ADMIN] Advice on multi-machine high-reliability setup?
Hi,

I've done some work with databases, but nothing extreme. I just got assigned the setting up of what I'd call a "high-reliability" site, as opposed to "high-availability" -- which I'd like too, of course :-) I've got some ideas on how to achieve my goal, but I fear I'm not quite up to date on the possibilities of replication and such, and I haven't found much in-depth documentation on the kind of setup I'd like (pointers anyone?), so I'd appreciate it if you could give my ideas the once-over and tell me if I'm missing something.

Requirements:
=============

The basic model is that of a reference database; I am a repository for data which is constantly consulted and regularly updated. OK, that pretty much describes any database :-) Anyway, a lot more queries than updates.

I'll have one or more webservers using PHP to insert into the database. I want them to be limited to queries and inserts so that bugs in the PHP scripts can only insert bad data and not remove good data. I prefer to have programs that modify the central data be server-side logic, and not farm out "UPDATE"s to PHP scripts.

I want to keep a record of all updates (made through client scripts) for some time, and I do want to be able to query the updates, which I suppose eliminates simple logging. I need a ticket number for each update (I'm avoiding the term "transaction"), and the capability to see when and how the update was fulfilled, which updates were made in a given time-frame, etc.

Consulting has to be immediate, while updates don't have to go through instantaneously, but I want them rather quick. One or two minutes is OK; five minutes starts being a long time. If the client scripts say "there was an error, try again" it's not a severe problem. If they say "OK" and the data doesn't make it into the nightly backup, even once, that's a showstopper, and nobody will use my system. The least acceptable reason for losing data once acknowledged is the near-simultaneous catastrophic loss of hard disks on two separate machines. (Yes, software RAID -- or maybe even hardware if I get my hands on enough controllers -- will make that four disks lost simultaneously :-))

I think a good way would be for updating clients to write to one machine, and delay acknowledgement until the data is read from the second "slave" machine, possibly saying after some time "there seems to be a delay, please recheck in a few minutes to see if your update went through". Does that sound feasible?

I need to be able to exchange machines. I fully expect my crummy hardware to break, I know that sooner or later I'll want to upgrade the OS in ways that require a reboot, or upgrade crummy hardware, and I don't want the downtime. I'm working on the premise that having multiple cheap Linux boxes (I've got lots of layoffs) is better than one or two really big expensive servers (no money for that anyway). I want to be able to take a new machine, install it to specifications (or restore one from backup!), switch it on, bring it up to date with the current database, and let it take over as a hot backup and/or primary server. Or as a web server, of course.

I don't have much of an idea on data size or update frequency, which is one of the reasons I want to be able to plug in new machines seamlessly; if load gets high, it means I'm popular, and I'll be able to get better hardware :-)

My ideas:
=========

I'm thinking that the PHP scripts could write updates to a "write-only" table/database/server.
I suppose there is a simple way to make an insert into an auto-increment table and get in return the unique number to use as a ticket. I'd use the newly released replication to have the updates and the data on a query-only server, an update only being acknowledged to the user when it's registered on the query-only server.

Once the update is registered, I use some kind of script to convert "ticket=a, time=xx:xx who=Alice name=XXX newvalue=YYY, done=NULL" into "UPDATE data SET value=YYY, lastchange=a WHERE name=XXX; UPDATE tickets SET done=time WHERE ticket=a;". A stored procedure triggered on ticket creation? I've never used them . . . do they work across machines? That is, what would be the best way to have update tables on one machine and data tables on another? If I had that, changing the data master would be transparent to users, who'd just notice a five-minute delay before the update went through instead of 30 (?) seconds.

It would be cool to use transactions, but I don't think one transaction can refer to two databases on two machines (yet)? This should also enable me to restore a backup of the primary data (in case of loss of the primary data), apply the updates since the backup was made, and end up with an up-to-date system. Hmm. Is this necessary if I have replication . . . is replication necessary if I have this?

My doubts:
==========

If I do manage to put updates and data on two different servers, would it be possible to make a transaction on the dat
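The "insert into an auto-increment table and get the number back" part, at least, is straightforward (a sketch with invented names; on releases of that vintage the generated value is read back with currval() in the same session rather than with RETURNING):

    CREATE TABLE update_tickets (
        ticket    serial PRIMARY KEY,
        t         timestamp DEFAULT now(),
        who       text,
        name      text,
        newvalue  text,
        done      timestamp        -- stays NULL until the update has been applied
    );

    INSERT INTO update_tickets (who, name, newvalue) VALUES ('Alice', 'XXX', 'YYY');
    SELECT currval('update_tickets_ticket_seq');   -- the ticket number for the row just inserted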
[ADMIN] Where's the list archived?
Where is the list archived? I did not see a link to an archive on the PostgreSQL home page. I would like to have a look there before bothering y'all with my questions.

TIA

Sincerely,
Jimmie Farmer
[ADMIN] Problems compiling postgresql-6.4.2 for Sparc-Linux
I can't seem to get this to compile... here is where the trouble starts:

In file included from /usr/include/sys/sem.h:8, from proc.c:71:
/usr/include/asm/bitops.h:23: warning: no previous prototype for `set_bit'
/usr/include/asm/bitops.h:43: warning: no previous prototype for `clear_bit'
/usr/include/asm/bitops.h:63: warning: no previous prototype for `change_bit'
/usr/include/asm/bitops.h:210: warning: no previous prototype for `test_bit'
/usr/include/asm/bitops.h:216: warning: no previous prototype for `ffz'
/usr/include/asm/bitops.h:232: warning: no previous prototype for `find_next_zero_bit'
/usr/include/asm/bitops.h:277: warning: no previous prototype for `__ext2_set_bit'
/usr/include/asm/bitops.h:297: warning: no previous prototype for `__ext2_clear_bit'
/usr/include/asm/bitops.h:395: warning: no previous prototype for `__ext2_test_bit'
/usr/include/asm/bitops.h:405: warning: no previous prototype for `__swab16'
/usr/include/asm/bitops.h:410: warning: no previous prototype for `__swab32'
/usr/include/asm/bitops.h:419: warning: no previous prototype for `__ext2_find_next_zero_bit'
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c single.c -o single.o
ld -r -o SUBSYS.o lmgr.o lock.o multi.o proc.o single.o
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/lmgr'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/page'
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c bufpage.c -o bufpage.o
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c itemptr.c -o itemptr.o
ld -r -o SUBSYS.o bufpage.o itemptr.o
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/page'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/smgr'
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c md.c -o md.o
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c mm.c -o mm.o
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c smgr.c -o smgr.o
gcc -I../../../include -I../../../backend -O2 -Wall -Wmissing-prototypes -I../.. -c smgrtype.c -o smgrtype.o
ld -r -o SUBSYS.o md.o mm.o smgr.o smgrtype.o
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/smgr'
for i in buffer file ipc large_object lmgr page smgr; do make -C $i file/SUBSYS.o; done
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/buffer'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/buffer'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/file'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/file'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/ipc'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/ipc'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/large_object'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/large_object'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/lmgr'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/lmgr'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/page'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/page'
make[3]: Entering directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/smgr'
make[3]: *** No rule to make target `file/SUBSYS.o'. Stop.
make[3]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage/smgr'
make[2]: *** [file/SUBSYS.o] Error 2
make[2]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend/storage'
make[1]: *** [storage.dir] Error 2
make[1]: Leaving directory `/u/u7/p/postgres/postgresql-6.4.2/src/backend'
make: *** [all] Error 2

This is on Red Hat 4.1 Sparc-Linux, kernel 2.0.35, using egcs-1.0.2. Thanks for any pointers you may have for me!

Sincerely,
Jimmie Farmer
[ADMIN] Compilation errors under Sparc-Linux SOLVED
Here is what I had to do to get this to compile on my Sun running Red Hat 4.1 Sparc-Linux. In the file src/backend/storage/file/fd.c:

    #ifndef HAVE_SYSCONF
        no_files = (long) NOFILE;
    #else
        /* no_files = sysconf(_SC_OPEN_MAX); */
        no_files = getdtablesize();
        if (no_files == -1)
        {
            elog(DEBUG, "pg_nofile: Unable to get _SC_OPEN_MAX using sysconf() using (%d)", NOFILE);
            no_files = (long) NOFILE;
        }
    #endif

Note that I had to use "getdtablesize()" instead of "sysconf(_SC_OPEN_MAX)". It compiled with no errors after that change.

Sincerely,
Jimmie Farmer
Re: [ADMIN] Triggers on postgres
Hello!! If you have already created the trigger functions, the description of this is in the file /postgres/postgresql-6.4.2/contrib/spi/README. Depending on the type of trigger or integrity you want to apply, you may have to create the functions from the administrator account. To create triggers in another database, under another account: first, you must grant privileges on the tables where the triggers will be created. Then you can do it with the CREATE TRIGGER statement. Note: before creating triggers, you must create the indexes of the database, because the integrity checking relies on them!

PS: My English is not good, sorry! Try it, and if you experience any problem, tell us!

Liliana Vicenteño Loya
Universidad La Salle, Mex.
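A minimal sketch of the two steps described, roughly following contrib/spi/refint.example from that release (the table, column, and role names here are invented):

    -- grant access on the table that will carry the trigger
    GRANT ALL ON b TO appuser;

    -- attach the contrib/spi referential-integrity function as a trigger:
    -- check_primary_key(<referencing column>, <referenced table>, <referenced column>)
    CREATE TRIGGER b_ri
        BEFORE INSERT OR UPDATE ON b
        FOR EACH ROW
        EXECUTE PROCEDURE check_primary_key('refb', 'a', 'id');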
[ADMIN] Re: Graphics GUI for PostgreSQL
Or one can use Webmin to manage it. But I haven't figured out how. The problem is that when I enter the PostgreSQL server, it rejects me and says "Webmin needs to know your PostgreSQL administration login and password in order to manage your database." Does anyone know how to set up the "PostgreSQL administration login and password" in Webmin? Thank you so much.
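One guess at what Webmin is asking for (the Webmin side here is an assumption; the SQL side is standard): it wants a database role and password it can connect as, so give the postgres superuser a password, allow password authentication for it, and then enter that login in the Webmin PostgreSQL module's configuration.

    ALTER USER postgres WITH PASSWORD 'choose-a-password';

and make sure pg_hba.conf has a password (md5) line for the connection Webmin uses, for example:

    host  all  postgres  127.0.0.1/32  md5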
[ADMIN] UTF8 characters
I had my database set to SQL_ASCII and switched to UTF8, but now I notice that I must add a backslash before periods/dots ( \. vs . ) to insert into a varchar. Is this normal?

Thanks, J
[ADMIN] pgmemcache
Has anyone used pgmemcache? I would like to look into it more, but I'm having problems installing the SQL. I'm on OS X 10.4, and in the SQL there are lines causing errors, e.g.:

    AS '$libdir/pgmemcache', 'memcache_server_add' LANGUAGE 'C' STRICT;

Thanks for any input. Also, will version 1.2 come out of beta? I'm looking to implement it at work and they are not happy about using beta releases.

Thanks, J
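For context, that line is only the tail of a CREATE FUNCTION statement, which in the pgmemcache SQL script looks roughly like this (the signature below is a from-memory sketch, not the authoritative one); an error on it usually means the pgmemcache shared library was not built or is not installed in $libdir, rather than a problem with the SQL itself:

    CREATE OR REPLACE FUNCTION memcache_server_add(text)
    RETURNS bool
    AS '$libdir/pgmemcache', 'memcache_server_add'
    LANGUAGE 'C' STRICT;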
Re: [ADMIN] deinstallation - reinstallation on Mac OS 10.4
Run this on the command line:

    locate postmaster.pid

or

    find / -name "postmaster.pid" -print

That should locate any postmaster files; then remove whatever looks like the postmaster.pid file.
Re: [ADMIN] deinstallation - reinstallation on Mac OS 10.4
I hate to sound like a --- but did you read the README on starting the server? You start the server by using the postmaster command:

    /path/to/postmaster -D /path/to/data

Look at the PostgreSQL manual - it's all there.

J
Re: [ADMIN] how to create a limited user
Did you even look for the information on postgresql.org? http://www.postgresql.org/docs/8.2/interactive/sql-createuser.html
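For the archives, a minimal "limited user" along the lines of that documentation page might look like this on 8.2 (role, database, and table names are placeholders):

    CREATE USER limited_user WITH PASSWORD 'secret' NOSUPERUSER NOCREATEDB NOCREATEROLE;
    GRANT CONNECT ON DATABASE mydb TO limited_user;
    GRANT SELECT, INSERT ON mytable TO limited_user;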
[ADMIN] Unclosed connections
We are using a bad piece of software that does not close its connections to the Postgres server. Is there some setting for closing dead connections? And no, TCP/IP keepalive does not work.
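There is no single "close idle connections after N minutes" setting in core Postgres of this era, so the usual workaround is to spot and kill the abandoned backends by hand or from a cron job (a sketch; the column names are the pre-9.2 ones, pg_terminate_backend() needs 8.4 or later, and the pid is obviously a placeholder):

    -- see which sessions are sitting idle and since when
    SELECT procpid, usename, client_addr, query_start, current_query
    FROM pg_stat_activity;

    -- as superuser, terminate a specific abandoned backend
    SELECT pg_terminate_backend(12345);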
[ADMIN] Mixing an SSD and Spinning drive for best performance
Hello - I have a large Postgres database (100+ GB) on Ubuntu 12. I have an old, SATA-based, spinning drive and a new SSD drive. I mainly do data analysis on Postgres (big, ugly select statements). I see a few options:

1. SSD for data and OS. Spinning for temp space.
2. SSD for data. Spinning for OS and temp space.
3. SSD for OS. Spinning for data and temp space.

I'm thinking that option #2 is the best, but I wonder if moving the OS onto the SSD would give me a boost. I'm not too concerned about boot-up time or OS responsiveness as much as getting more speed out of my DB queries. I have plenty of space on both drives for the data and OS with room to spare. Which configuration would you recommend? Any other suggestions to get more performance?
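For the temp-space part of options 1 and 2, Postgres can be told explicitly to put temporary files (sorts, hashes, temp tables) on the spinning drive while the main data stays on the SSD (a sketch; the mount point and tablespace name are placeholders, temp_tablespaces needs 8.3+, and ALTER SYSTEM needs 9.4+ - on older releases set it in postgresql.conf instead):

    -- the directory must exist and be owned by the postgres OS user
    CREATE TABLESPACE temp_spin LOCATION '/mnt/spinning/pg_temp';
    ALTER SYSTEM SET temp_tablespaces = 'temp_spin';
    SELECT pg_reload_conf();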
[ADMIN] Problems with upgrading 6.3.1 to 6.4
Hello! I've got a problem with converting my old database from version 6.3.1 to 6.4. I tried pg_dumpall, but the 6.4 psql cannot import it; it crashes with an error. Maybe someone knows another conversion utility? I've got a big database (the pg_dumpall result is about 50MB), and I am running Linux 2.0.36.

Regards, - Grych
[ADMIN] Proposal for restoring a dump into a database with a different owner
Hi,

I have the same problem as Andreas Haumer did in this thread: http://archives.postgresql.org/pgsql-admin/2008-01/msg00128.php -- I want to be able to easily (i.e. programmatically) copy a database from one place to another, changing the owners of all contained objects in the process.

While I very much appreciate Tom Lane's fast and helpful responses to Andreas on that thread, it doesn't quite address my problem: there is no simple, automatable 1- or 2-step process that can accomplish this (without Andreas's (admittedly neat) trick of temporarily changing the destination user to superuser status). The best I've been able to do is hack up a Perl script that parses the output of pg_restore -l, directing superuser-requiring operations to one file and non-superuser-requiring operations to another; but afterwards the superuser-requiring operations still have to have the owners of the objects they produce manually reassigned.

My instincts (which could be wrong...) tell me that this is actually a fairly common problem. So, I suggest the following enhancement to pg_restore: add a --map-users command-line option that accepts the name of a file containing two usernames on each line, the original owner and the owner to map it to. Then (provided -O was not specified) when producing ALTER ... OWNER TO commands, simply replace every original user listed in this file with the corresponding new user.

Another niggle is that the COMMENT ON DATABASE command, produced by pg_restore when run without the -d option, always refers to the name of the original database, which will cause an error if the new DB has a different name. It would be nice to have an option (or other means) to remedy this.

It seems to me that these things would be pretty simple to implement and sufficiently general to tackle this problem neatly, without opening up any security holes (you would still need to be *some* DB superuser for the ALTER ... OWNER TO commands to work). Does this sound sensible? If Tom or another high-ranking PostgreSQLer okays it in principle, I suppose I could try developing a patch for pg_restore myself. (Never done this before but there's a first time for everything...)

TIA,
Tim White
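For comparison, the closest one can get today without such an option is to drop ownership commands entirely and reassign afterwards (a sketch with placeholder names; pg_restore's --role option appeared in 8.4, and REASSIGN OWNED needs 8.2 or later):

    createdb -O newowner newdb
    pg_restore --no-owner --role=newowner -d newdb mydump.dump
    # or, after an ordinary restore, hand everything over in one statement:
    psql -d newdb -c 'REASSIGN OWNED BY oldowner TO newowner;'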