Re: [ADMIN] Slony1 or DRBD for replication ?
Slony should be the choice :)

On 4/14/06, Pierre LEBRECH <[EMAIL PROTECTED]> wrote:

Hello,
I want to replicate my PostgreSQL database to another location. The distance between the two locations will be around 10 miles, and the link will be a dedicated fast Ethernet link.
What would you suggest: DRBD or Slony-I for PostgreSQL replication?
Thank you.
Re: [ADMIN] tsearch2 installation problem
I just tried it: go into the contrib/tsearch2 folder and then run

make
make install

That creates a tsearch2.so file in the lib directory of pgsql, which is used while registering the tsearch2 functions. Make sure you have that .so file under the lib folder of pgsql. Hope this helps...

/Shoaib

On 17 Apr 2006 01:16:09 -0700, Danish <[EMAIL PROTECTED]> wrote:

Hi Everyone,
I have installed postgresql-8.1.3.tar.gz. After compiling and installing, PostgreSQL runs fine, but I am not able to install tsearch2. I went into the PostgreSQL source directory /usr/local/postgresql8.1.3/contrib and ran

gmake all
gmake install

After issuing these commands, a contrib directory was created under /usr/local/pgsql/share. From the /usr/local/pgsql/share/contrib directory I ran the following as user postgres:

psql -d dbname -f tsearch2.sql
SET
BEGIN
psql:tsearch2.sql:13: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "pg_ts_dict_pkey" for table "pg_ts_dict"
CREATE TABLE
psql:tsearch2.sql:20: ERROR: could not access file "$libdir/tsearch2": No such file or directory
psql:tsearch2.sql:26: ERROR: current transaction is aborted, commands ignored until end of transaction block

Can anyone please help me out as to what is going wrong?
Thanks
Danish
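For reference, a minimal way to check where the server expects the module, and to rebuild it if it is missing, might look like the following (the paths are the ones from the report above and may differ on your system):

    # directory the server resolves $libdir to
    pg_config --pkglibdir
    # confirm the module is actually installed there
    ls -l $(pg_config --pkglibdir)/tsearch2.so
    # if it is missing, build and install only the contrib module
    cd /usr/local/postgresql8.1.3/contrib/tsearch2
    gmake && gmake install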
Re: [ADMIN] Migration from oracle
ECPG can be a good replacement for Pro*C. You can see the detailed documentation for it at http://www.postgresql.org/docs/8.1/interactive/ecpg.html

Hope this helps...

/Shoaib

On 4/19/06, koji osada <[EMAIL PROTECTED]> wrote:

I'm planning to migrate a database from Oracle to PostgreSQL. I think there are some challenges in migrating Pro*C/COBOL, binary data (especially moving to a different-endian platform) and stored procedures. I think I can apply SwisSQL to the PL/SQL, but I have no idea how to migrate Pro*C/COBOL without making changes by hand.
1. Is there any good solution?
2. Can I use ecpg instead of Pro*C? How can I migrate?
regards,
koji
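As a rough sketch of the ecpg build workflow (the file and program names here are only placeholders):

    # translate the embedded-SQL source (.pgc) into plain C
    ecpg myprog.pgc
    # compile and link against the ecpg runtime library
    cc -I$(pg_config --includedir) -o myprog myprog.c -L$(pg_config --libdir) -lecpg

The embedded SQL statements themselves (EXEC SQL ...) still have to be ported by hand wherever Oracle-specific syntax is used.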
Re: [ADMIN] Modifying the encoding of a database
I don't think you can modify the database encoding... the solution would be to recreate the database with the changed encoding.

/Shoaib

On 5/2/06, Rodrigo Sakai <[EMAIL PROTECTED]> wrote:

Hi, is it possible to modify the encoding of a production database? Or is the only way to create a new database with the correct encoding?
Thanks!
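A minimal dump-and-recreate sequence could look like this (the database names and target encoding are only examples):

    pg_dump mydb > mydb.sql
    createdb -E UTF8 mydb_utf8
    psql -d mydb_utf8 -f mydb.sql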
Re: [ADMIN] Authentication troubleshooting using psql
Try using

\c dbname

and if you want to get rid of the password prompt, you can change your authentication method from 'md5' to 'trust' in your pg_hba.conf file and then reload the settings using "pg_ctl reload".

/Shoaib

On 5/23/06, Tahir Tamba <[EMAIL PROTECTED]> wrote:

Hi Guys,
I installed Postgres 8.1.3 on WinXP Pro using the msi installer. Using Postgres with pgAdmin III, everything works well. But when I try to use the psql terminal interface to connect to my database, psql asks me for my password, and when I try to type the password nothing happens on the keyboard.
Example: postgres=# \connectdb postgres
Password for postgres user: (unable to type my password, the cursor doesn't move)
And when I press Enter I get the following error:
"FATAL: password authentication failed for user postgres"
Who can help me solve this?
Thanks
Tahir Tamba
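For illustration, the pg_hba.conf change and the reload might look like this (the data directory path is only the usual default for this installer and may differ):

    # pg_hba.conf: allow local connections without a password
    host    all    all    127.0.0.1/32    trust

    # reload the configuration
    pg_ctl reload -D "C:\Program Files\PostgreSQL\8.1\data"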
Re: Re : Re: [ADMIN] Authentication troubleshooting using psql
The way you are reloading is not the right way to reload the configuration. As you are on Windows and used the installer, the right way to reload your configuration after you have changed the pg_hba.conf file is:

Go to Start --> Programs --> PostgreSQL --> Reload Configuration

For connecting to a database from the SQL prompt, use \c dbname.

I hope this helps, but do try the latest installer (8.1.4) as mentioned in the last email, as it has a few fixes which can help you.

/Shoaib

On 5/24/06, Tahir Tamba <[EMAIL PROTECTED]> wrote:

Dear Mir,
Thank you for your answer. I have edited the pg_hba file, changing the method "md5" to "trust". I then tried the reload command at the psql prompt: "postgres=# \pg_ctl reload" to update the file, and I get the message "Query buffer is empty" and the command arguments are ignored.
Thanks again for your help
Tahir Tamba

From: Shoaib Mir <[EMAIL PROTECTED]>
Date: Tuesday, May 23, 2006 1:33 pm
Subject: Re: [ADMIN] Authentication troubleshooting using psql
> Try using
> \c dbname
> and if you want to get rid of the password prompt you can change your authentication method from 'md5' to 'trust' in your pg_hba.conf file and then reload the settings using "pg_ctl reload"
> /Shoaib
Re: [ADMIN] Task/Jobs issue
Hi Michael,
You can try using an add-on like pgjobs (http://gborg.postgresql.org/project/pgjobs/projdisplay.php).

Thanks,
Shoaib Mir
EnterpriseDB

On 7/21/06, Michael Grossman <[EMAIL PROTECTED]> wrote:

Hi! I am using PostgreSQL 8.1.3 (Windows version) and pgAdmin 1.4.3. I cannot find the Jobs section for scheduling tasks. Please assist.
Best regards,
Michael Grossman
New Concept Travel Technologies Ltd
Re: [ADMIN] problem backup/restore PSQL DB
You can also try using 'pg_dumpall' (http://www.postgresql.org/docs/8.1/static/app-pg-dumpall.html), which will take a dump of all the users, groups and databases.

Thanks,
Shoaib Mir
EnterpriseDB

On 7/25/06, Paul S <[EMAIL PROTECTED]> wrote:

It looks like it's a user configuration issue. I have hit user setup issues while restoring DBs before, and generally I find it useful to create the users and/or groups manually on the new server before restoring the DBs whenever I hit problems like that. Try syncing the users manually (assuming there are only a few and it can be done manually with some level of ease) and then try restoring again.
-Paul
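A minimal sketch of the pg_dumpall round trip (the user name and file name are only examples):

    # dump every database plus roles/groups into one SQL script
    pg_dumpall -U postgres > all_databases.sql
    # restore on the new server
    psql -U postgres -f all_databases.sql postgres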
Re: [ADMIN] Checking what is the current query running
You have to enable 'stats_command_string' in your postgresql.conf file:

stats_command_string = on

After that is done you can view the queries running on the database server like this:

select current_query from pg_stat_activity;

You can also get the query execution times, and which queries were run, in the database server log files by enabling the following in postgresql.conf:

log_duration = on
log_statement = 'all'

Hope this helps...

Thanks,
Shoaib Mir
EnterpriseDB

On 7/25/06, Marco Bizzarri <[EMAIL PROTECTED]> wrote:

Hi all.
I have a PostgreSQL server where a process can sometimes keep a query running for a very long time. Aside from logging the queries, is there a way to know which query is running at a particular moment?
Regards
Marco
http://notenotturne.blogspot.com/
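For a little more context per session, a query along these lines should work (column names as they appear in the 8.1 pg_stat_activity view):

    -- one row per backend, with the statement it is currently running
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;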
Re: [ADMIN] problem backup/restore PSQL DB
Yes, it is possible to take a backup from one instance and restore it into another PostgreSQL instance; for that purpose you use the pg_dump (for a full dump of the whole cluster use pg_dumpall) and pg_restore utilities.

Thanks,
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 7/25/06, JieJun Xu <[EMAIL PROTECTED]> wrote:

Thanks for helping. My PostgreSQL DB is actually part of the OME system, which stores image metadata. The system overview is here: http://www.openmicroscopy.org/system-overview/. So I am actually using the option provided by OME to backup/restore my system: http://www.openmicroscopy.org/system-admin/backup_restore.html

Part of the backup code in Perl is as follows:

    print " \\_ Backing up postgress database ome\n";
    my $dbConf = $environment->DB_conf();
    my $dbName = 'ome';
    $dbName = $dbConf->{Name} if $dbConf->{Name};
    my $flags = '';
    $flags .= '-h '.$dbConf->{Host}.' ' if $dbConf->{Host};
    $flags .= '-p '.$dbConf->{Port}.' ' if $dbConf->{Port};
    $flags .= '-U '.$dbConf->{User}.' ' if $dbConf->{User};
    $flags .= '-Fc';  # -F (format): -p = plain-text SQL script (most portable),
                      #              -c = custom archive suitable for input into pg_restore
    print STDERR "su $postgress_user -c '".$prog_path{'pg_dump'}." $flags -o $dbName > /tmp/omeDB_backup'\n";
    # back up the database and watch the output from pg_dump
    foreach (`su $postgress_user -c '$prog_path{'pg_dump'} $flags -o $dbName > /tmp/omeDB_backup' 2>&1`) {
        print STDERR "\nDatabase Backup Failed: $_" and die
            if $_ =~ /pg_dump/ or $_ =~ /ERROR/ or $_ =~ /FATAL/;
    }
    # check the size of omeDB_backup
    if (stat("/tmp/omeDB_backup")->size < 1024) {
        print STDERR "\nDatabase Backup Failed: /tmp/omeDB_backup is less than 1024 bytes in size\n";
        die;
    }
    # log the version of the backup
    open (FILEOUT, "> /tmp/OMEmaint") or die "Couldn't open OMEmaint for writing\n";
    print FILEOUT "version=$dbAdmin_version\n";
    close (FILEOUT);

I will try changing 'pg_dump' to 'pg_dumpall' and see if it works. But is it even possible to back up one system and restore it on another? I just don't want to go down a dead end. Many thanks!!
Jun

On 7/25/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:
> You can also try using 'pg_dumpall', which will take a dump of all the users, groups and databases.
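As a sketch of the custom-format dump/restore round trip, using the names from the script above (the user name is only an example):

    # custom-format dump, suitable for pg_restore
    pg_dump -Fc -U postgres ome > /tmp/omeDB_backup
    # restore into a freshly created database on the other instance
    createdb -U postgres ome
    pg_restore -U postgres -d ome /tmp/omeDB_backup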
Re: [ADMIN] Connection refused. Check that the hostname and port are correct
It appears that the JDBC driver is not able to connect to the PostgreSQL database. Try the following:

- See if there is a firewall running; run 'iptables -F' on the database server machine to disable it for a while and then test the same Java code.
- Check the routing table info and see if there is anything missing there.
- If the above two are okay and it is still not connecting, try setting the following in your pg_hba.conf file:

host all all 127.0.0.1/32 trust
host all all 0.0.0.0/0 trust

and then do a 'pg_ctl reload' and see whether that lets you connect through the JDBC driver, both locally and over the LAN.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 7/20/06, Omer Mustafa <[EMAIL PROTECTED]> wrote:

Hi, I am new to PostgreSQL. I am trying to connect to my PostgreSQL database installed on Unix. I am getting the following error message:

An error occurred while establishing the connection.
Type: org.postgresql.util.PSQLException  Error Code: 0  SQL State: 08004
Message: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Exception: java.net.ConnectException: Connection refused: connect

I have checked the port number, user name and password and they are all correct. I have also checked that the postmaster is running:

postgres 25398 25344 0 11:57:51 pts/2 0:00 psql syslog
postgres 25548 25546 0 16:01:35 pts/3 0:00 /usr/local/pgsql/bin/postmaster
postgres 25613 25580 0 16:43:01 pts/3 0:00 grep post
postgres 25344 25301 0 10:57:05 pts/2 0:00 -tcsh
postgres 25546     1 0 16:01:35 pts/3 0:00 /usr/local/pgsql/bin/postmaster
postgres 25550 25549 0 16:01:35 pts/3 0:00 /usr/local/pgsql/bin/postmaster
postgres 25549 25546 0 16:01:35 pts/3 0:00 /usr/local/pgsql/bin/postmaster
postgres 25580 25443 0 16:34:17 pts/3 0:00 -tcsh

I have the following settings in my configuration file:

host  syslog  sysloguser  10.10.10.10/32   trust
host  syslog  sysloguser  192.168.0.0/32   trust

Please advise what I am doing wrong.
--
Thanks and Regards
Omer Mustafa
Oracle Application DBA
Re: [ADMIN] PostgreSQL hanging
Do you have any antivirus running on your system? If so, try disabling it. This problem can occur when some other process is holding a lock on that specific file, so PostgreSQL is not able to rename it.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 7/31/06, Benjamin Krajmalnik <[EMAIL PROTECTED]> wrote:

I am running PG 8.1.4/Windows. Over the weekend I have had to restart the PostgreSQL service a couple of times after PG hangs. When this happens, the log shows entries such as the one below:

2006-07-31 09:19:55 LOG: could not rename file "pg_xlog/0001020800E1" to "pg_xlog/0001020800F4", continuing to try

Any ideas what may be causing this? My database is currently 7.7 GB. I have 54 GB of drive space available, so it is not being caused by lack of disk space. My autovacuum parameters are as follows:

#---
# AUTOVACUUM PARAMETERS
#---
autovacuum = on                         # enable autovacuum subprocess?
autovacuum_naptime = 20                 # time between autovacuum runs, in secs
autovacuum_vacuum_threshold = 100       # min # of tuple updates before vacuum
autovacuum_analyze_threshold = 100      # min # of tuple updates before analyze
autovacuum_vacuum_scale_factor = 0.1    # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.1   # fraction of rel size before analyze
autovacuum_vacuum_cost_delay = 100      # default vacuum cost delay for autovac, -1 means use vacuum_cost_delay
autovacuum_vacuum_cost_limit = 100      # default vacuum cost limit for autovac, -1 means use vacuum_cost_limit

Any suggestions will be appreciated.
Re: [ADMIN] how to increase performance of Postgre
Have a look at http://www.powerpostgresql.com/PerfList/ and change your settings accordingly.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 8/1/06, Javanesevn <[EMAIL PROTECTED]> wrote:

Hi all,
My project (a web application) is using PostgreSQL 8.1 and PHP 4.x (the ADOdb library to access the database). I would like to increase performance, so I changed the configuration as follows:

shared_buffers = 16000   (default = 1000)
wal_buffers = 64         (default = 8)

But these settings did not have a strong effect. Please show me another way, or your ideas, to resolve my problem. Thanks a lot.
Best regards,
Javanesevn.
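For illustration only, the settings that guide usually points at look something like this; the values below are placeholders that depend entirely on your RAM and workload, not recommendations:

    # postgresql.conf (8.1) -- example values, adjust to your hardware
    shared_buffers = 16000           # data-page cache, in 8 kB pages
    effective_cache_size = 65536     # planner hint for OS + PG caching, in 8 kB pages
    work_mem = 8192                  # per-sort/hash memory, in kB
    maintenance_work_mem = 65536     # memory for VACUUM, CREATE INDEX, in kB
    checkpoint_segments = 16         # more WAL segments between checkpoints for write-heavy loads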
Re: [ADMIN] DB Synchronization.
You can use Slony (http://gborg.postgresql.org/project/slony1/projdisplay.php) for replication (data synchronization), and it also offers a failover feature (not automatic). You can also use a combination of Linux-HA + Slony for high availability.

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 8/26/06, nilesh khode <[EMAIL PROTECTED]> wrote:

Hello All,
I have PostgreSQL 8 installed on 2 RH Linux boxes. I am keeping one server in standby mode. If one server fails for any reason, the DB should be served from the other server. I have one question about the synchronization: how do I keep the other server up to date? It should take an updated db from the first server and restore it on the other server. Please help me out.
Regards,
Nilesh
Re: [ADMIN] Instructions For Building On Windows?
Please use MinGW for that purpose. I haven't tried Visual Studio, but MinGW does work on Windows without a problem.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 9/18/06, Jeanna Geier <[EMAIL PROTECTED]> wrote:

Hello Again-
I'm sorry to post this again; I was hoping someone would have responded to my initial post over the weekend, but no such luck.
I am looking for instructions for building Postgres on Windows using Visual C++ (I'm using 2005 Express Edition). If someone could please help me, I would be very grateful!!
Thanks in advance!
-Jeanna

----- Original Message -----
From: "Jeanna Geier" <[EMAIL PROTECTED]>
Sent: Friday, September 15, 2006 3:11 PM
Subject: Re: [ADMIN] Instructions For Building On Windows?

> I should also tell you that I'm wanting to build version 8.1.4 of Postgres.
> Thanks for your help!!
> -Jeanna
>
> From: "Jeanna Geier" <[EMAIL PROTECTED]>
> Sent: Friday, September 15, 2006 2:47 PM
> Subject: [ADMIN] Instructions For Building On Windows?
>
>> Hi All- Newbie question that hopefully someone will be able to quickly help me with: I'm looking for instructions for building Postgres on Windows using Visual C++ (I'm using 2005 Express Edition). Is there anything available out there, or can anyone provide me with some instructions? The install instructions provided with the source are for Unix, and when searching the mailing list or googling this, I'm not really coming up with anything.
>> Thanks much!
>> -Jeanna
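As a rough outline only, the MinGW/MSYS build of an 8.1.x source tree follows the usual Unix-style steps (the prefix is just an example, and extra configure options may be needed depending on which optional libraries are installed):

    # from an MSYS shell, inside the unpacked postgresql-8.1.4 source tree
    ./configure --prefix=/c/pgsql
    make
    make install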
Re: [ADMIN] psql: FATAL: the database system is starting up
Try using the fast shutdown switch with pg_ctl, i.e. -m fast; that might help.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 9/21/06, Mr. Dan <[EMAIL PROTECTED]> wrote:

Hi,
We had to kill the postmaster, and it's been ugly. The last backup finished successfully not last night, but the night before. This is a 161 GB database. How long should I expect this to take before it starts? Developers are coming into my office every 5 minutes asking for updates. It's never taken this long to come back. We did have an awful lot of transactions last night. Is the server replaying everything from the last successful backup? Is it hanging because it can't find some WAL files that were archived? It's been an hour and a half so far. Would a reboot help?
~DjK

Sep 21 10:07:05 p postgres[28301]: [2-3] recovery.
Sep 21 10:07:05 p postgres[28301]: [3-1] LOG: checkpoint record is at 14D/69FFBD8
Sep 21 10:07:05 p postgres[28301]: [4-1] LOG: redo record is at 14D/696AC20; undo record is at 0/0; shutdown FALSE
Sep 21 10:07:05 p postgres[28301]: [5-1] LOG: next transaction ID: 39405490; next OID: 1799955
Sep 21 10:07:05 p postgres[28301]: [6-1] LOG: next MultiXactId: 5696; next MultiXactOffset: 11694
Sep 21 10:07:05 p postgres[28301]: [7-1] LOG: database system was not properly shut down; automatic recovery in progress
Sep 21 10:07:05 p postgres[28301]: [8-1] LOG: redo starts at 14D/696AC20
Sep 21 10:07:20 p postgres[28305]: [2-1] LOG: invalid length of startup packet
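For reference, the shutdown-mode syntax looks like this (the data directory and log paths are only placeholders; the log location depends on your logging setup):

    # fast shutdown: abort active transactions and disconnect clients cleanly
    pg_ctl stop -m fast -D /usr/local/pgsql/data
    # watch recovery progress on the next start by tailing the server log
    tail -f /var/log/postgresql.log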
Re: [ADMIN] Estimating a table size
http://www.postgresql.org/docs/8.1/static/diskusage.html can help you get this information.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 9/21/06, Guido Barosio <[EMAIL PROTECTED]> wrote:

There used to be a dbsize project, search for it.
g.-

On 9/21/06, Rodrigo Sakai <[EMAIL PROTECTED]> wrote:
> Hello all,
> I need a tool, or something like it, which can give me the estimated size of a table considering all fields, constraints, indexes, etc. For example:
>
>   CREATE TABLE products (
>     product_id     INT8,
>     product_name   VARCHAR(50),
>     product_vendor VARCHAR(50),
>     price          NUMERIC(10,2),
>     CONSTRAINT pk_products PRIMARY KEY (product_id),
>     CONSTRAINT ck_price CHECK (price >= 0)
>   );
>
>   CREATE INDEX ix_products ON products (product_name);
>
> In the example above I have some fields, some constraints and one index. I want to know (estimate) how much physical space (KB, MB) it will consume!
> Thanks in advance!

--
Guido Barosio
http://www.globant.com
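On 8.1 the built-in size functions described on that page can be used directly against the example table, e.g.:

    -- total on-disk size of the table including its indexes
    SELECT pg_size_pretty(pg_total_relation_size('products'));
    -- size of a single index
    SELECT pg_size_pretty(pg_relation_size('ix_products'));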
Re: [ADMIN] Postgres Logging doesnt work
I guess this might work:

1. Set the following in the postgresql.conf file --> log_rotation_age = 1
2. Rename the log file already present in the db server's logs folder
3. Now do --> pg_ctl reload (so that the log_rotation_age change you made comes into effect)
4. Check the log folder after a while (after 1 minute); it should have a new log file and be doing all the logging there
5. Once logging has resumed, you can revert the change made to log_rotation_age and do a reload again.

Thanks,
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/2/06, Pallav Kalva <[EMAIL PROTECTED]> wrote:

Hi Andrew,
Thanks for the reply. It is a small file; the file size is 735K, which is pretty small. Not sure why it stopped writing to the log file. Is there any way to restart logging without restarting postgres?
Pallav.

Andrew Sullivan wrote:
> On Mon, Oct 02, 2006 at 11:04:11AM -0400, Pallav Kalva wrote:
>> Hi, my production database stopped writing to the postgres log files all of a sudden. Does anybody know why?
>
> How big is the file?
>
>> log_rotation_age  | 10080
>> log_rotation_size | 0
>
> You've set this up to rotate once every 10,080 minutes, no matter what. If you've run into a file size limit, then you'll be out of luck until the next file is opened, which should be on the same day of the week the postmaster was last started. Just a guess.
> A
Re: [ADMIN] 100% failover + replication solution
There is a project which is not actually released yet, but it is something like what you want to achieve :) http://pgfoundry.org/projects/pgpitrha

Regards,
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/30/06, Ben Suffolk <[EMAIL PROTECTED]> wrote:

> Guys,
> I have been thinking about this and wanted to see if it can be achieved. I wanted to make a 100% failover solution for my postgres databases. The first thing that comes to my mind is doing it using WAL logs. I am attaching the diagram, about which I will write more here.

While it's not the solution you were looking at, have you seen PGCluster: http://pgcluster.projects.postgresql.org/index.html
I have not tried it, but I was looking the other week at various failover-type solutions and came across it. It seems to be able to do what you want.
Ben
Re: [ADMIN] 100% failover + replication solution
Hi Moiz,
If I had to choose for your case, where you want to direct your selects to the slave node and your inserts/updates to the master, I would opt for Slony or PGCluster.

Using PITR for HA can be a good option if you want to switch between a primary and a secondary server: you store the archive files on a shared disk, place a recovery file within $PGDATA and automate the process so that recovery runs on the secondary, for example five times a day; it all depends on the number of transactions happening on the db server (a configuration sketch follows this message). I have seen a few users doing this around a routine VACUUM FULL maintenance activity.

Thanks,
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/30/06, Moiz Kothari <[EMAIL PROTECTED]> wrote:

Shoaib,
It sure does. I saw PGCluster, but there are 3 reasons for wanting a Postgres-specific solution:
1) If PGCluster stops further development, it would be a lot more hassle when upgrading to a different version of postgres.
2) A Postgres-specific solution would help a lot going ahead in the future.
3) Also, the architecture of PGCluster might make things slower, as it updates the complete cluster before confirming the request.
There are lots of solutions available in the market, but I think a WAL-based solution should be available; if not, then the thought process should be there going forward. I am expecting a solution based on WAL logs. Let me know if you have any thoughts about it.
Regards,
Moiz Kothari

On 10/30/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:
> There is a project which is not actually released yet, but it is something like what you want to achieve :) http://pgfoundry.org/projects/pgpitrha
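For illustration, the WAL-archiving pieces of that setup might look like this (the shared-disk path is a placeholder, not part of the original mail):

    # postgresql.conf on the primary: ship completed WAL segments to the shared disk
    archive_command = 'cp %p /mnt/shared_archive/%f'

    # recovery.conf placed in $PGDATA on the secondary before starting it
    restore_command = 'cp /mnt/shared_archive/%f %p'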
Re: [ADMIN] 100% failover + replication solution
Moiz,
Have you tried pgpool? It comes with a built-in load balancer as well.

For PITR in an HA scenario, I don't remember where I read it, but someone did it like this:

- Make a base backup of the primary server, say five times a day (depending on the transactions happening on the db server)
- Automate these base backups so that they always end up on the secondary server
- Keep archiving enabled, with the archives saved directly to a shared disk
- Have recovery.conf placed in the $PGDATA of the secondary server
- When you want to switch servers, just start the postmaster with the recovery file in place; that way it will come up to date with the primary server
- Once you have switched to the secondary, mark it as primary and the old primary as secondary, and keep doing the same in a loop

You can automate this easily with a few scripts; a sketch of the base-backup step follows this message.

There is a new feature in 8.2 that lets you set archive_timeout, so that after a specified amount of time a new WAL segment is archived. This makes things easier on low-transaction systems, where a WAL segment does not fill up quickly enough to be copied to the archive folder.

Hope this helps in your case.

Thank you,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/30/06, Moiz Kothari <[EMAIL PROTECTED]> wrote:

Shoaib,
I agree that PGCluster might be a better option. I don't want to go with Slony because of its primary key constraints, but PGCluster is a good option. The only concerns are:
1) It might slow down the process a bit, as confirmation happens after the transaction gets committed to all the nodes.
2) It is difficult to convince people, as it is an external project, and if support for it stops or future versions of postgres do not work with it, that might be a problem.
Can you elaborate more on the way PITR for HA is used for primary and secondary servers? Maybe you can light a bulb in me so I can go ahead with the approach. I like the idea of using WAL logs because it is postgres-internal, and secondly it would be the fastest way of keeping databases in sync without slowing down other servers.
Awaiting your reply.
Regards,
Moiz Kothari
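A minimal sketch of the scripted base-backup step, assuming the shared archive location used above (the paths and label are placeholders):

    # run on the primary; the label is arbitrary
    psql -U postgres -c "SELECT pg_start_backup('daily_base')"
    tar czf /mnt/shared_archive/base_backup.tar.gz -C /usr/local/pgsql data
    psql -U postgres -c "SELECT pg_stop_backup()"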
Re: [ADMIN] 100% failover + replication solution
Hi Moiz,
This might help you :) --> http://developer.postgresql.org/pgdocs/postgres/warm-standby.html

Thanks,
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/30/06, Andrew Sullivan <[EMAIL PROTECTED]> wrote:

On Mon, Oct 30, 2006 at 06:41:54PM +0530, Moiz Kothari wrote:
> I agree that PGCluster might be a better option, i dont want to go with Slony because of primary key constraints.

I don't know what the "primary key constraints" issue you have is, but Slony would be inappropriate for a "100% failover" system anyway: you can't know you haven't trapped data on the origin. This is in fact true for the WAL shipping you suggested, also. The only way to achieve 100% reliable failover today, with guaranteed no data loss, is to use a system that commits all the data on two machines at the same time in the same transaction. I haven't seen any argument so far that there is any such system "out of the box", although with two-phase commit support available, it would seem that some systems could be extended in that direction.

The other answer for all of this is to do it with hardware, but that's a shared-disk system, so if your disk blows up, you have a problem. Or, if you're using the operating system of people who don't know how fsck works. I don't know anyone who has that problem; certainly not any vendors whose name starts with 'I' and ends with 'M'.

> 1) It might slow down the process a bit. as confirmation happens after transaction gets comitted to all the nodes.

Anyone who tells you that you can have completely reliable data replication with no performance hit is trying to sell you a bridge in Brooklyn. If you want reliable data replication that guarantees you can have automatic failover, you are going to pay for it somehow; the question is which compromise you want to make. That seems to be something you'll need to decide.

> 2) Its difficult to convince, as it is an external project and if support for the same stops or future versions of postgres does not work, it might be a problem.

If you have this problem, probably free software isn't for you. PostgreSQL is a modular system, and people use different components together in deployed systems. This happens to be true of commercial offerings too (if not, you could buy the cheapest version of, say, Oracle and get RAC in the bargain), but they _sell_ it to you as though it were one big package. To the extent your managers don't understand this, you're always going to have a problem using free software.

A
--
Andrew Sullivan | [EMAIL PROTECTED]
Re: [ADMIN] 100% failover + replication solution
Moiz,
Hmm, I think the pg_controldata output might help you get those stats; see the example after this message. pg_controldata can be found in the PostgreSQL /bin folder.

Thank you,
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 10/31/06, Moiz Kothari <[EMAIL PROTECTED]> wrote:

Shoaib,
This sounds really like what I need, but again, 8.2 is in beta. Just one question, and I don't know if I am thinking in the right direction: is there any way of finding out the end XID, transaction ID and OID of the last archived WAL log applied to the database? I was thinking that if I can get those values, I can reset the xlogs using pg_resetxlog to that point and then start applying the new WAL logs. Am I thinking about this correctly, or is there some flaw here? Also, if I am thinking about it right, how can I find the details asked above?
Regards,
Moiz Kothari
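For reference, running it looks like this (the data directory path is a placeholder); among other fields, the output includes the latest checkpoint's NextXID and NextOID:

    # print cluster control information, including checkpoint, XID and OID counters
    pg_controldata /usr/local/pgsql/data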
Re: [ADMIN] 100% failover + replication solution
Moiz,
Yes, EnterpriseDB is compatible with SLES 10.

Thank you,
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 11/1/06, Moiz Kothari <[EMAIL PROTECTED]> wrote:

Shoaib,
Also, just so you know, we plan to use the SLES 10 OS, so please also let me know whether EnterpriseDB is compatible with that OS.
Regards,
Moiz
Re: [ADMIN] New to PostgreSQL
Another very good resource is the PostgreSQL documentation itself --> http://www.postgresql.org/docs/8.1/static/index.html

Thanks,
---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 11/2/06, Mark Steben <[EMAIL PROTECTED]> wrote:

Good afternoon,
I am a database administrator who has worked with mainframe DB2 databases for 16 years. I am brand new to PostgreSQL. I just started a new job where I am the only DBA in the company and am expected to ramp up as soon as possible. Are there any courses available to me - either self-taught or classroom - that could provide me with a broad-brush introduction? Right now my main resource is "PostgreSQL: The Comprehensive Guide" by Korry and Susan Douglas - a very good reference that I am using as I tackle specific tasks. I think a good overview would help me as I basically self-educate.
Thank you, and I look forward to working with you through your mailing list.
Mark Steben
AutoRevenue
Lee, Massachusetts
Re: [ADMIN] Issue with processes open for clients
If you don't need the client connection after the query has executed, just close that client connection, or you can have your firewall kill idle connections that are not doing anything.

Thanks,
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 11/10/06, jose fuenmayor <[EMAIL PROTECTED]> wrote:

Hi all,
I have a question, or a doubt. When I run psql remotely, it opens a process on the server. I issue a query (select count(*) from an 80-million-row table) and I can see that process taking CPU and memory on the server (using top). Then I cancel psql on the client. I expect to see that process gone, but that is not the case: it is still there, and taking resources! When I cancel the query issued by the client, isn't the process opened for it on the server supposed to go away? Why does the process stay hanging there? Is there any parameter in PostgreSQL or the operating system that I can tune?
I am using Mandriva Linux Free 2006 (kernel 2.6.12), PostgreSQL 8.0.8 and phpPgAdmin 4.0.1 (running in a web server with the PHP 5.0.4 module). The same thing happens when using phpPgAdmin.
Thanks in advance! I really appreciate your help!
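To see which backends are sitting idle, a query like this can be used (it assumes stats_command_string is enabled, and uses the column names of the 8.0-era pg_stat_activity view):

    -- idle backends report '<IDLE>' as their current query
    SELECT procpid, usename, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE>';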
Re: [ADMIN] What is best link to install latest postgresql on windows xp
You can get that from here --> http://wwwmaster.postgresql.org/download/mirrors-ftp?file=binary%2Fv8.2.0%2Fwin32%2Fpostgresql-8.2.0-1.zip

Regards,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/10/06, Philippe Salama <[EMAIL PROTECTED]> wrote:

I just uninstalled at home, because what I had is old and I want the latest. I installed at work, but I forgot the best link to install the latest postgresql, and also the ODBC driver for Windows so I can link to tables with MS Access.
Thanks!
Re: [ADMIN] Installing postgres now. What is a good shareware admin program?
The Windows installer has pgAdmin bundled in, which can serve as a very easy-to-use database administration tool.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/10/06, Philippe Salama <[EMAIL PROTECTED]> wrote:

Thanks for the install link. Can anyone recommend a good freeware admin program for PostgreSQL? Does anyone use Navicat (which is not free, I know)? Any opinions?
Thanks!
Re: [ADMIN] which ODBC to install
Once you have installed PostgreSQL, you can use the ANSI or Unicode ODBC driver from the ODBC driver manager in Windows, as it gets installed automatically during installation.

Thanks,
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/10/06, Philippe Salama <[EMAIL PROTECTED]> wrote:

I just installed PostgreSQL 8.2.1. Now I want an ODBC driver to work on my Dell XP machine with MS Access. The choices are very confusing. Please advise; I am not a rocket scientist.
Thanks!
Re: [ADMIN] which ODBC to install
Glad to hear that it worked out well for you :-)

Thanks,
-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/10/06, Philippe Salama <[EMAIL PROTECTED]> wrote:

Thanks Shoaib. You are perfectly correct! After I installed postgresql, without installing any ODBC driver, I went to MS Access and found both the ANSI and the Unicode driver. I think I chose ANSI. I was able to link to all the tables which the Celko script defined, and I was able to browse the test data in those tables, and yet PG Explorer gave some kind of error when I tried to browse, and I could not for the life of me see where pgAdmin allows me to browse tables. But anyway, I am making progress: I have the latest postgresql installed, and MS Access can link to the tables.
Thanks, all!
Re: [ADMIN] how to recover database back from /data folder [ Possibly Recovered ]
All looks according to the logs to me...

---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/11/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote:

Hi List,
I restored the filesystem backup tar and ran postgres 7.3.8 on the data folder. There was no problem at all.

DEBUG: FindExec: found "/opt/usr/local/pgsql7.3/bin/postgres" using argv[0]
DEBUG: invoking IpcMemoryCreate(size=1466368)
DEBUG: FindExec: found "/opt/usr/local/pgsql7.3/bin/postmaster" using argv[0]
LOG: database system was shut down at 2006-12-04 12:30:21 IST
LOG: checkpoint record is at 0/39DF3C2C
LOG: redo record is at 0/39DF3C2C; undo record is at 0/0; shutdown TRUE
LOG: next transaction id: 2006905; next oid: 15212626
LOG: database system is ready
DEBUG: proc_exit(0)
DEBUG: shmem_exit(0)
DEBUG: exit(0)
DEBUG: reaping dead processes

I think the above messages support the fact that the database was shut down properly before the filesystem-level backup. Can anyone kindly confirm it?
I listed the tables and did some selects; the data seems to be there. I think postgres was not starting because of some peripheral issues, not because the data folder was corrupted.
BTW, when I first started the postmaster it gave an error related to too-liberal permissions. Probably the original poster was making the same mistake.
Regards
mallah.
Re: [ADMIN] how to recover database back from /data folder [ Possibly Recovered ]
I meant "all looks *good* according to the logs" :) ------- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/11/06, Shoaib Mir <[EMAIL PROTECTED]> wrote: All looks according to the logs to me... ------- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/11/06, Rajesh Kumar Mallah < [EMAIL PROTECTED]> wrote: > > Hi List , > > I restored the filesystem backup tar and ran postgres 7.3.8 on the data > folder. > there was no problem at all. > > DEBUG: FindExec: found "/opt/usr/local/pgsql7.3/bin/postgres" using > argv[0] > DEBUG: invoking IpcMemoryCreate(size=1466368) > DEBUG: FindExec: found "/opt/usr/local/pgsql7.3/bin/postmaster" using > argv[0] > LOG: database system was shut down at 2006-12-04 12:30:21 IST > LOG: checkpoint record is at 0/39DF3C2C > LOG: redo record is at 0/39DF3C2C; undo record is at 0/0; shutdown TRUE > LOG: next transaction id: 2006905; next oid: 15212626 > LOG: database system is ready > DEBUG: proc_exit(0) > DEBUG: shmem_exit(0) > DEBUG: exit(0) > DEBUG: reaping dead processes > > I think the above messages support the fact that the database was > shutdown > properly before the filesystem level backup. > Can anyone kindly confirm it ? > > I listed the tables and did some selects the data seems to be there. > I think postgres was not starting for some peripheral issues > not because that data folder was corrupted. > > BTW > when i first started postmaster it gave an error related to > too liberal permissions. Probably the original poster was doing > the same mistake. > > Regds > mallah. >
Re: [ADMIN] Migration problems with encoding.
Have you tried using 'iconv' to clean up the invalid UTF8 characters? In the case of big files you can always split them.

-----
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/12/06, Manuel Trujillo <[EMAIL PROTECTED]> wrote:

Hi!
First, I apologize for my bad English... sorry...
I have a database in postgres 8.0 with the encoding defined as UTF8. All works fine, but because of the new features of the 8.2 version, we would like to migrate the database to this new version. The problem: encoding incompatibilities between the postgresql versions. How can I resolve this with a database of about 16 GB? Is there any document about how to solve this?
Thank you very much.
--
Manuel Trujillo Albarral
Systems Manager, VLEX NETWORKS S.L.
http://www.vlex.com
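A sketch of that clean-up on a plain-SQL dump (the file names are placeholders; the -c flag silently drops byte sequences that are not valid in the target encoding, so review what gets removed):

    # strip byte sequences that are not valid UTF-8 from the dump
    iconv -f UTF-8 -t UTF-8 -c olddb_dump.sql > olddb_dump.clean.sql
    # for very large dumps, split first, convert the pieces, then concatenate them again
    split -b 1000m olddb_dump.sql dump_part_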
Re: [ADMIN] 8.2.0 bug :)
I got exactly the same output, without hitting any bug in version 8.2 :)

---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/12/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Hm, I can't see any bug here:

template1=# \di+
No relations found.
template1=#

It also works on a database with existing indexes.
Greetings,
Matthias

> -----Original Message-----
> From: Gábriel Ákos
> Sent: Tuesday, December 12, 2006 1:28 PM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] 8.2.0 bug :)
>
> Go to psql, issue \di+, and the server closes the connection :)
>
> Regards,
> Akos
Re: [ADMIN] PostgreSQL replication on Windows 2003 platform.?
You can get the latest easy-to-use installer from http://www.postgresql.org/ftp/binary/v8.2.0/win32/

Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/6/06, Aleksander Slavchev <[EMAIL PROTECTED]> wrote:

Hi!
I need to build data replication with PostgreSQL, BUT on the Windows 2003 platform. I started reading about Slony, pgpool and others, but I can't find a reasonable description of how to install, configure and start such a PostgreSQL installation on the Windows platform. Do you have some useful information?
- I have 4 masters which will replicate data to one slave (warehouse) - around 3 million records a week.
Best regards,
Alex
Re: [ADMIN] Backup
To me PITR looks like a very nice solution for incremental backup, and it can even serve as a warm standby. What exactly are the complications you see with WAL archiving?

---
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/13/06, Eduardo J. Ortega <[EMAIL PROTECTED]> wrote:

Hi there:
Are there any nice (official or third-party) backup utilities for postgres? I have a database which is several GB, so pg_dumping it to a file and then bzip2-ing it every hour is not really the way to go. I've read a little about using WAL for incremental backup, but it sounds a little complicated and *very* user-error prone.
(Not sure if this is the right place for it, but I really think that PG's developers should consider including a viable backup utility in the next version.)
Thanks.
--
Eduardo J. Ortega - Linux user #222873
Re: [ADMIN] Backup
In addition, the backup is performed only everytime the WAL file is filled; i need to take >>consistent backups every hour or so You have the archive_timeout ( http://www.postgresql.org/docs/8.2/static/runtime-config-wal.html) option for that in PG 8.2 to achieve similar functionality Well, I don't really like the fact that admin has to specify the archiving and restoring command; an error here (a lack of -i in cp, for example) may render the backup useless You dont have to take the base backup again and again, just will be needing the WAL archives to restore to a certain point, but if still you want to take the backup you can easily write a cron job and schedule it. If this is it, then I'll end up with an old level zero (i.e. full, base ) backup and A LOT of level 1 (i.e. transaction log) backup. I think it should be more like it's for Informix, where you ask the database to perform an online level 0 backup (base); after this, it stores transaction logs on disk, which you can archive with level 1 backup. I think that can be also possible using something similar to warm standby ( http://www.postgresql.org/docs/8.2/static/warm-standby.html) Hope this helps.... -------- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/13/06, Eduardo J. Ortega <[EMAIL PROTECTED]> wrote: Hi: Well, I don't really like the fact that admin has to specify the archiving and restoring command; an error here (a lack of -i in cp, for example) may render the backup useless. In addition, the backup is performed only everytime the WAL file is filled; i need to take consistent backups every hour or so, and I am not sure if that time represents more or less than a WAL file ( i could still measure that, i guess). Finally, as I understand, the WAL backup method works like this: 1) Take full base FS backup 2) get some way to copy WAL files If this is it, then I'll end up with an old level zero (i.e. full, base ) backup and A LOT of level 1 (i.e. transaction log) backup. I think it should be more like it's for Informix, where you ask the database to perform an online level 0 backup (base); after this, it stores transaction logs on disk, which you can archive with level 1 backup. Then, say everyonce in a week, you get another level 0 backup, and the database clears the already-archived logs from disk and starts all over. I guess this could be achieved with PG, but it requieres considerably more steps (pg_start_backup, pg_stop_backup, manually cleaning old log files which could be an error point), or I am getting something wrong. Besides, why do you need to tell the database to stop the backup? shouldn't it stop by itself when there's no more information to be archived? Perhaps if any of you has this method working, you could provide me with your archiving and restoring commands as well as a description of the whole process the way you implemented it. Thanks, Eduardo. On Wednesday 13 December 2006 02:39, Shoaib Mir wrote: > To me PITR looks like a very nice solution for incremental backup and even > they can serve as a warm standby. What exactly are the complications you > see with WAL archiving? > > --- > Shoaib Mir > EnterpriseDB (www.enterprisedb.com) > > On 12/13/06, Eduardo J. Ortega <[EMAIL PROTECTED]> wrote: > > Hi there: > > > > Are there any nice (official or third party) backup utilities for > > postgres? I > > have a database which is several GB, so pg_dumping it to file and then > > bzipping2 every hour is not really the way to go. 
I've read a little > > about using WAL for incremental backup, but it sounds a little > > complicated and *very* user-error prone. > > > > (Not sure if this is the right place for it, but i really think that PG's > > developers should consider including a viable backup utility in the next > > version) > > > > thanks. > > > > -- > > Eduardo J. Ortega - Linux user #222873 > > "No fake - I'm a big fan of konqueror, and I use it for everything." -- > > Linus > > Torvalds > > > > ---(end of broadcast)--- > > TIP 2: Don't 'kill -9' the postmaster -- Eduardo J. Ortega - Linux user #222873 "No fake - I'm a big fan of konqueror, and I use it for everything." -- Linus Torvalds ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
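To make the pg_start_backup/pg_stop_backup discussion in this thread concrete, here is a rough sketch of the routine (the script name, schedule and paths are placeholders; pg_start_backup/pg_stop_backup exist from 8.0 on, while archive_timeout is 8.2-only):

# weekly base backup, run as the postgres user
psql -c "SELECT pg_start_backup('weekly_base');"
tar --exclude=pg_xlog -czf /mnt/backup/base/base-$(date +%Y%m%d).tar.gz -C "$PGDATA" .
psql -c "SELECT pg_stop_backup();"

# postgresql.conf (8.2): force a WAL segment switch at least once an hour,
# so the archive never lags more than about 60 minutes behind
archive_timeout = 3600

# crontab entry calling a script that wraps the three commands above
0 3 * * 0 /usr/local/bin/pg_base_backup.sh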
Re: [ADMIN] storage
They are physically stored in the PGDATA folder; for example, on a default Windows installation you can find it at: C:\Program Files\PostgreSQL\8.2\data Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/13/06, Antonios Katsikadamos <[EMAIL PROTECTED]> wrote: Hi all, Sorry to bother. I would like to know where postgres stores the created database and the tables.Can anyone help, thnx for any help, Antonios __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com
Re: [ADMIN] storage
To be more precise, they are under PGDATA\base... Shoaib Mir EnterpriseDB ( www.enterprisedb.com) On 12/13/06, Shoaib Mir <[EMAIL PROTECTED]> wrote: They are physically stored in the PGDATA folder; for example, on a default Windows installation you can find it at: C:\Program Files\PostgreSQL\8.2\data ---- Shoaib Mir EnterpriseDB ( www.enterprisedb.com) On 12/13/06, Antonios Katsikadamos <[EMAIL PROTECTED] > wrote: > > Hi all, > > Sorry to bother. I would like to know where postgres stores the created > database and the tables.Can anyone help, > > > thnx for any help, > > Antonios > > __ > Do You Yahoo!? > Tired of spam? Yahoo! Mail has the best spam protection around > http://mail.yahoo.com >
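For anyone wanting to map those directories and files back to database and table names, a quick sketch (the table name is just an example):

-- each sub-directory of PGDATA/base is named after a database OID
SELECT oid, datname FROM pg_database;

-- inside that directory, data files are named after relfilenode values
SELECT relname, relfilenode FROM pg_class WHERE relname = 'mytable';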
Re: [ADMIN] out of memory in backup and restore
Can you please show the db server logs and the syslog from the time it goes out of memory? Also, how much RAM do you have available, and what is SHMMAX set to? Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/15/06, Thomas Markus <[EMAIL PROTECTED]> wrote: Hi, i'm running pg 8.1.0 on a debian linux (64bit) box (dual xeon 8gb ram) pg_dump creates an error when exporting a large table with blobs (largest blob is 180mb) error is: pg_dump: ERROR: out of memory DETAIL: Failed on request of size 1073741823. pg_dump: SQL command to dump the contents of table "downloads" failed: PQendcopy() failed. pg_dump: Error message from server: ERROR: out of memory DETAIL: Failed on request of size 1073741823. pg_dump: The command was: COPY public.downloads ... TO stdout; if i try pg_dump with -d dump runs with all types (c,t,p), but i cant restore (out of memory error or corrupt tar header at ...) how can i backup (and restore) such a db? kr Thomas ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
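A few commands that gather the numbers asked for above on a Linux box (run the ulimit line as the postgres user, since that is the account whose limits matter):

free -m                          # total / free RAM and swap
cat /proc/sys/kernel/shmmax      # current SHMMAX in bytes
su - postgres -c 'ulimit -a'     # per-process limits for the server account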
Re: [ADMIN] out of memory in backup and restore
Looks like with 1.8 GB usage not much left for dump to get the required chunk from memory. Not sure if that will help but try increasing the swap space... - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/15/06, Thomas Markus <[EMAIL PROTECTED]> wrote: Hi, logfile content see http://www.rafb.net/paste/results/cvD7uk33.html - cat /proc/sys/kernel/shmmax is 2013265920 - ulimit is unlimited kernel is 2.6.15-1-em64t-p4-smp, pg version is 8.1.0 32bit postmaster process usage is 1.8gb ram atm thx Thomas Shoaib Mir schrieb: > Can you please show the dbserver logs and syslog at the same time when > it goes out of memory... > > Also how much is available RAM you have and the SHMMAX set? > > ---- > Shoaib Mir > EnterpriseDB ( www.enterprisedb.com <http://www.enterprisedb.com>) > > On 12/15/06, *Thomas Markus* <[EMAIL PROTECTED] > <mailto:[EMAIL PROTECTED]>> wrote: > > Hi, > > i'm running pg 8.1.0 on a debian linux (64bit) box (dual xeon 8gb ram) > pg_dump creates an error when exporting a large table with blobs > (largest blob is 180mb) > > error is: > pg_dump: ERROR: out of memory > DETAIL: Failed on request of size 1073741823. > pg_dump: SQL command to dump the contents of table "downloads" failed: > PQendcopy() failed. > pg_dump: Error message from server: ERROR: out of memory > DETAIL: Failed on request of size 1073741823. > pg_dump: The command was: COPY public.downloads ... TO stdout; > > if i try pg_dump with -d dump runs with all types (c,t,p), but i cant > restore (out of memory error or corrupt tar header at ...) > > how can i backup (and restore) such a db? > > kr > Thomas > > ---(end of > broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org > > -- Thomas Markus Tel:+49 30 29 36 399 - 22 Fax:+49 30 29 36 399 - 50 Mail: [EMAIL PROTECTED] Web:www.proventis.net Web:www.blue-ant.de proventis GmbH Zimmerstraße 79-80 10117 Berlin "proventis: Wir bewegen Organisationen." ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
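If adding swap is the route taken, a minimal sketch for a Linux host (the 2GB size and the /swapfile path are arbitrary; run as root, and add an /etc/fstab entry if it should survive a reboot):

dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
swapon -s    # confirm the new swap area is active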
Re: [ADMIN] invalid types and tables
It appears to me that you dropped the user who owned the database objects being warned about during the dump. I guess dropping that user was possible in your case because you are using an older release; you can't do the same in 8.1 and onwards. I would recommend recreating the dropped user to get around this problem... - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/15/06, Sandy Spence <[EMAIL PROTECTED]> wrote: Hi all admins, Even though I am new to postgres I have been given the job of administrator, in at the deep end you might say. We have a cron job that runs nightly to back up all of our users databases, the output from the cron job shows the following WARNINGs pg_dump: WARNING: owner of data type pga_queries appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557151 appears to be invalid pg_dump: WARNING: owner of data type pga_forms appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557156 appears to be invalid pg_dump: WARNING: owner of data type pga_scripts appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557161 appears to be invalid pg_dump: WARNING: owner of data type pga_reports appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557166 appears to be invalid pg_dump: WARNING: owner of data type pga_schema appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557171 appears to be invalid pg_dump: WARNING: owner of data type pga_layout appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557176 appears to be invalid pg_dump: WARNING: owner of data type adult_results appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557181 appears to be invalid pg_dump: WARNING: owner of data type child_results appears to be invalid pg_dump: WARNING: owner of data type pg_toast_557186 appears to be invalid pg_dump: WARNING: owner of table "pga_queries" appears to be invalid pg_dump: WARNING: owner of table "pga_forms" appears to be invalid pg_dump: WARNING: owner of table "pga_scripts" appears to be invalid pg_dump: WARNING: owner of table "pga_reports" appears to be invalid pg_dump: WARNING: owner of table "pga_schema" appears to be invalid pg_dump: WARNING: owner of table "pga_layout" appears to be invalid pg_dump: WARNING: owner of table "adult_results" appears to be invalid pg_dump: WARNING: owner of table "child_results" appears to be invalid pg_dump: WARNING: owner of data type trigger appears to be invalid My two questions are: 1: How do I find out which owner is being referenced, when I look at a number of user accounts (databases) they all appear to have pga_forms, pga_reposts etc. 2: How do I display a list of the above mentioned types? We are running version 7.2.3 (upgrading in the new year) Thanks in advance for any assistance, Regards, Sandy Spence Computer Officer, RA Certification Manager Department of Computer Science - UWA Llandinam Building Penglais Campus Aberystwyth Ceredigion Wales - UK SY23 3DB Tel: (01970)-622433 Fax: (01970)-628536 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] invalid types and tables
For table ownership you can use the following: select tablename, tableowner from pg_tables; For types: select pg_type.typname, pg_user.usename from pg_user, pg_type where pg_user.usesysid = pg_type.typowner; This is how you can get it in 8.1 onwards, haven't used the version you have so not sure if the above queries will work in it --- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/15/06, Sandy Spence <[EMAIL PROTECTED]> wrote: Hi Shoaib, My problem is to determine which user/s this/these is/are, I believe we have had a pg_dump problem long before I took over the support role, users had been removed on a regular basis. There has been a pg_dump ERROR that I believe will have halted the backup process. Is there any way I can get a list of users who have the associated tables and types? Regards, Sandy Computer Officer, RA Certification Manager Department of Computer Science - UWA Llandinam Building Penglais Campus Aberystwyth Ceredigion Wales - UK SY23 3DB Tel: (01970)-622433 Fax: (01970)-628536 -- *From:* [EMAIL PROTECTED] [mailto: [EMAIL PROTECTED] *On Behalf Of *Shoaib Mir *Sent:* 15 December 2006 12:35 *To:* Sandy Spence *Cc:* pgsql-admin@postgresql.org *Subject:* Re: [ADMIN] invalid types and tables It appears to me you dropped the user that was owner of the database objects been warned during dump activity. I guess dropping user was possible in you case as you are using an older release but you cant do the same in 8.1 and onwards. I will recommend recreating the user that was dropped to get around this problem... --------- Shoaib Mir EnterpriseDB (www.enterprisedb.com ) On 12/15/06, Sandy Spence <[EMAIL PROTECTED]> wrote: > > Hi all admins, > > Even though I am new to postgres I have been given the job of > administrator, > in at the deep end you might say. 
> > We have a cron job that runs nightly to back up all of our users > databases, > the output from the cron job shows the following WARNINGs > > pg_dump: WARNING: owner of data type pga_queries appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557151 appears to be > invalid > pg_dump: WARNING: owner of data type pga_forms appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557156 appears to be > invalid > pg_dump: WARNING: owner of data type pga_scripts appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557161 appears to be > invalid > pg_dump: WARNING: owner of data type pga_reports appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557166 appears to be > invalid > pg_dump: WARNING: owner of data type pga_schema appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557171 appears to be > invalid > pg_dump: WARNING: owner of data type pga_layout appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557176 appears to be > invalid > pg_dump: WARNING: owner of data type adult_results appears to be invalid > pg_dump: WARNING: owner of data type pg_toast_557181 appears to be > invalid > pg_dump: WARNING: owner of data type child_results appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557186 appears to be > invalid > pg_dump: WARNING: owner of table "pga_queries" appears to be invalid > pg_dump: WARNING: owner of table "pga_forms" appears to be invalid > pg_dump: WARNING: owner of table "pga_scripts" appears to be invalid > pg_dump: WARNING: owner of table "pga_reports" appears to be invalid > pg_dump: WARNING: owner of table "pga_schema" appears to be invalid > pg_dump: WARNING: owner of table "pga_layout" appears to be invalid > pg_dump: WARNING: owner of table "adult_results" appears to be invalid > pg_dump: WARNING: owner of table "child_results" appears to be invalid > pg_dump: WARNING: owner of data type trigger appears to be invalid > > My two questions are: > > 1: How do I find out which owner is being referenced, when I look at a > number of user accounts (databases) they all appear to have pga_forms, > pga_reposts etc. > > 2: How do I display a list of the above mentioned types? > > We are running version 7.2.3 (upgrading in the new year) > > Thanks in advance for any assistance, > > Regards, > > Sandy Spence > > Computer Officer, RA Certification Manager > Department of Computer Science - UWA > Llandinam Building > Penglais Campus > Aberystwyth > Ceredigion > Wales - UK > SY23 3DB > Tel: (01970)-622433 > Fax: (01970)-628536 > > > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org >
Re: [ADMIN] invalid types and tables
Well I will not recommend deleting rows like this from the meta data, but if I will rather try using ALTER TABLE and ALTER TYPE to assign an owner for the missing user referenced type or table For example --> ALTER TYPE name OWNER TO new_owner ---- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/15/06, Sandy Spence <[EMAIL PROTECTED]> wrote: Hi Shoaib, I took you sql query and tweaked it a little, I have produced lists that show that at least one entry for the offending types (e.g. one instance of say pga_layout does not have an entry in pg_user.usename, the field is blank) appears to be wrong. Am I correct in my observation, can I just delete the offending row/s? Regards, Sandy Computer Officer, RA Certification Manager Department of Computer Science - UWA Llandinam Building Penglais Campus Aberystwyth Ceredigion Wales - UK SY23 3DB Tel: (01970)-622433 Fax: (01970)-628536 -- *From:* [EMAIL PROTECTED] [mailto: [EMAIL PROTECTED] *On Behalf Of *Shoaib Mir *Sent:* 15 December 2006 12:59 *To:* Sandy Spence *Cc:* pgsql-admin@postgresql.org *Subject:* Re: [ADMIN] invalid types and tables For table ownership you can use the following: select tablename, tableowner from pg_tables; For types: select pg_type.typname, pg_user.usename from pg_user, pg_type where pg_user.usesysid = pg_type.typowner; This is how you can get it in 8.1 onwards, haven't used the version you have so not sure if the above queries will work in it ----------- Shoaib Mir EnterpriseDB ( www.enterprisedb.com) On 12/15/06, Sandy Spence <[EMAIL PROTECTED]> wrote: > > Hi Shoaib, > > My problem is to determine which user/s this/these is/are, I believe we > have had a pg_dump problem long before I took over the support role, users > had been removed on a regular basis. There has been a pg_dump ERROR that I > believe will have halted the backup process. Is there any way I can get a > list of users who have the associated tables and types? > > Regards, > > Sandy > > > Computer Officer, RA Certification Manager > Department of Computer Science - UWA > Llandinam Building > Penglais Campus > Aberystwyth > Ceredigion > Wales - UK > SY23 3DB > Tel: (01970)-622433 > Fax: (01970)-628536 > > > > ------ > *From:* [EMAIL PROTECTED] [mailto: > [EMAIL PROTECTED] *On Behalf Of *Shoaib Mir > *Sent:* 15 December 2006 12:35 > *To:* Sandy Spence > *Cc:* pgsql-admin@postgresql.org > *Subject:* Re: [ADMIN] invalid types and tables > > It appears to me you dropped the user that was owner of the database > objects been warned during dump activity. I guess dropping user was possible > in you case as you are using an older release but you cant do the same in > 8.1 and onwards. > > I will recommend recreating the user that was dropped to get around this > problem... > > - > Shoaib Mir > EnterpriseDB (www.enterprisedb.com ) > > On 12/15/06, Sandy Spence <[EMAIL PROTECTED]> wrote: > > > > Hi all admins, > > > > Even though I am new to postgres I have been given the job of > > administrator, > > in at the deep end you might say. 
> > > > We have a cron job that runs nightly to back up all of our users > > databases, > > the output from the cron job shows the following WARNINGs > > > > pg_dump: WARNING: owner of data type pga_queries appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557151 appears to be > > invalid > > pg_dump: WARNING: owner of data type pga_forms appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557156 appears to be > > invalid > > pg_dump: WARNING: owner of data type pga_scripts appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557161 appears to be > > invalid > > pg_dump: WARNING: owner of data type pga_reports appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557166 appears to be > > invalid > > pg_dump: WARNING: owner of data type pga_schema appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557171 appears to be > > invalid > > pg_dump: WARNING: owner of data type pga_layout appears to be invalid > > pg_dump: WARNING: owner of data type pg_toast_557176 appears to be > > invalid > > pg_dump: WARNING: owner of data type adult_results appears to be > > invalid > > pg_dump: WARNING: owner of data type pg_toast_557181 appears to be > > invalid > > pg_dump: WARNING: owner of data type child_results appears to be > > invalid > > pg_dump: WARNING: owner of data type pg_toast_557186 appears to be > > invalid > > pg_dump: WARNING: owner of tabl
Re: [ADMIN] invalid types and tables
Agreed, that will be a good solution for this scenario... Thanks, - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/15/06, Tom Lane <[EMAIL PROTECTED]> wrote: "Shoaib Mir" <[EMAIL PROTECTED]> writes: > Well I will not recommend deleting rows like this from the meta data, but if > I will rather try using ALTER TABLE and ALTER TYPE to assign an owner for > the missing user referenced type or table No, just create a new user with the same usesysid as is shown as the owner in pg_class etc. See the SYSID option to CREATE USER. (FWIW, more recent PG versions prevent you from dropping users that still own objects.) regards, tom lane
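Putting Tom's suggestion into a sketch for a 7.2-era cluster (the user name and the SYSID value 105 are placeholders; use whatever orphaned id the first query reports):

-- find owner ids that no longer exist in pg_user
SELECT DISTINCT relowner FROM pg_class
WHERE relowner NOT IN (SELECT usesysid FROM pg_user);

-- recreate a user with exactly that id (the SYSID option is accepted up to 8.0)
CREATE USER recovered_owner WITH SYSID 105;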
Re: [ADMIN] invalid types and tables
Glad to hear that Sandy :) --- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/18/06, Sandy Spence <[EMAIL PROTECTED]> wrote: Hi Tom and Shoaib, Thanks for all your help, the penny dropped and I now understand what you meant by SYSID. I have traced the usename and all is now well. Regards, Sandy Computer Officer, RA Certification Manager Department of Computer Science - UWA Llandinam Building Penglais Campus Aberystwyth Ceredigion Wales - UK SY23 3DB Tel: (01970)-622433 Fax: (01970)-628536 -Original Message- From: Tom Lane [mailto:[EMAIL PROTECTED] Sent: 15 December 2006 15:50 To: Shoaib Mir Cc: Sandy Spence; pgsql-admin@postgresql.org Subject: Re: [ADMIN] invalid types and tables "Shoaib Mir" <[EMAIL PROTECTED]> writes: > Well I will not recommend deleting rows like this from the meta data, > but if I will rather try using ALTER TABLE and ALTER TYPE to assign an > owner for the missing user referenced type or table No, just create a new user with the same usesysid as is shown as the owner in pg_class etc. See the SYSID option to CREATE USER. (FWIW, more recent PG versions prevent you from dropping users that still own objects.) regards, tom lane
Re: [ADMIN] pg_clog
You can try creating the missing clog file as 256K of zeros; once that is done, try going through the pg_dump process again. This suggestion might help you as well: http://archives.postgresql.org/pgsql-admin/2003-02/msg00263.php ----- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/18/06, Edoardo Ceccarelli <[EMAIL PROTECTED]> wrote: Hi, just encountered this error trying to dump my db: any ideas? Thank you pg_dump: ERROR: could not access status of transaction 1629514106 DETAIL: could not open file "pg_clog/0612": No such file or directory pg_dump: SQL command to dump the contents of table "annuncio400" failed: PQendcopy() failed. pg_dump: Error message from server: ERROR: could not access status of transaction 1629514106 DETAIL: could not open file "pg_clog/0612": No such file or directory pg_dump: The command was: COPY public.annuncio400 (testo, telef1, pref1, rubric, tiprec, uscita, telef2, pref2, giornouscita, meseuscita, annouscita, prezzo, annofabbr, locali, codedi, codcliente, email, tipoannuncio, ka) TO stdout;
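A sketch of the zero-filled clog trick mentioned above, using the file name from the error message (stop the server first, keep a copy of the data directory, and fix ownership afterwards; this only lets the dump proceed, it does not undo the underlying corruption):

dd if=/dev/zero of=$PGDATA/pg_clog/0612 bs=256k count=1
chown postgres:postgres $PGDATA/pg_clog/0612
chmod 600 $PGDATA/pg_clog/0612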
Re: [ADMIN] OUTER JOIN IS SLOW
Try increasing the work_mem first to see the change, that might help. - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/24/06, Benjamin Arai <[EMAIL PROTECTED]> wrote: I have been running pieces of my PL function by hand and I have found that the following queries work by themselves taking less than a second to execute. getDateRange"('12/1/2005','12/1/2006') <- simply generates a date list. Doesn't even access a table SELECT * FROM mutualfd_weekday_qbid WHERE cusip='92193920' AND pkey >= '12/15/2005' AND pkey <= '12/15/2006'; But when combined as below it takes 10 seconds to execute. SELECT d1 as date, d2.data as data FROM "getDateRange"('12/1/2005','12/1/2006') d1 FULL OUTER JOIN (SELECT * FROM mutualfd_weekday_qbid WHERE cusip='92193920' AND pkey >= '12/15/2005' AND pkey <= '12/15/2006') d2 ON d1=d2.pkey; Do I need to increase the work_mem or is this possible still a ANALYZE issue? Benjamin ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
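A quick way to test the work_mem suggestion for a single session before touching postgresql.conf (the '32MB' form needs 8.2; on 8.1 and earlier, set an integer number of kilobytes such as 32768):

SET work_mem = '32MB';
EXPLAIN ANALYZE
SELECT d1 AS date, d2.data AS data
FROM "getDateRange"('12/1/2005','12/1/2006') d1
FULL OUTER JOIN (SELECT * FROM mutualfd_weekday_qbid
                 WHERE cusip = '92193920'
                   AND pkey >= '12/15/2005' AND pkey <= '12/15/2006') d2
  ON d1 = d2.pkey;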
Re: [ADMIN] OUTER JOIN IS SLOW
Adding to my last email: for now try work_mem, but you should also be running ANALYZE along with the VACUUM you already do regularly (with a cron job, I guess). Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/24/06, Shoaib Mir <[EMAIL PROTECTED]> wrote: Try increasing the work_mem first to see the change, that might help. - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/24/06, Benjamin Arai <[EMAIL PROTECTED]> wrote: > > I have been running pieces of my PL function by hand and I have found > that the following queries work by themselves taking less than a second > to execute. > > getDateRange"('12/1/2005','12/1/2006') <- simply generates a date > list. Doesn't even access a table > > SELECT * FROM mutualfd_weekday_qbid WHERE cusip='92193920' AND pkey >= > '12/15/2005' AND pkey <= '12/15/2006'; > > But when combined as below it takes 10 seconds to execute. > > SELECT d1 as date, d2.data as data FROM > "getDateRange"('12/1/2005','12/1/2006') d1 FULL OUTER JOIN (SELECT * > FROM mutualfd_weekday_qbid WHERE cusip='92193920' AND pkey >= > '12/15/2005' AND pkey <= '12/15/2006') d2 ON d1=d2.pkey; > > Do I need to increase the work_mem or is this possible still a ANALYZE > issue? > > Benjamin > > ---(end of broadcast)--- > > TIP 7: You can help support the PostgreSQL project by donating at > > http://www.postgresql.org/about/donate >
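A sketch of the kind of cron job being suggested here, using the vacuumdb wrapper so VACUUM and ANALYZE run together every night (the binary path and schedule are placeholders; put the line in the postgres user's crontab):

30 2 * * * /usr/local/pgsql/bin/vacuumdb --all --analyze --quiet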
Re: [GENERAL] [ADMIN] OUTER JOIN IS SLOW
Yes, ANALYZE should definitely improve the performance for query... -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/24/06, Benjamin Arai <[EMAIL PROTECTED]> wrote: Just to make things more clear I ran EXPLAIN ANALYZE on the slow query. I got Merge Full Join (cost=62.33..73.36 rows=1000 width=19) (actual time=39.205..8521.644 rows=272 loops=1) Merge Cond: ("outer".pkey = "inner".d1) -> Index Scan using mutualfd_weekday_qbid_pkey_idx on mutualfd_weekday_qbid (cost=0.00..6.01 rows=1 width=19) (actual time=34.579..8510.801 rows=253 loops=1) Index Cond: ((pkey >= '2005-12-15'::date) AND (pkey <= '2006-12-15'::date)) Filter: (cusip = '92193920'::text) -> Sort (cost=62.33..64.83 rows=1000 width=4) (actual time=4.604..5.674 rows=262 loops=1) Sort Key: d1.d1 -> Function Scan on "getDateRange" d1 (cost=0.00..12.50 rows=1000 width=4) (actual time=2.591..3.591 rows=262 loops=1) Total runtime: 8522.894 ms (9 rows) It looks like this line: "-> Index Scan using mutualfd_weekday_qbid_pkey_idx on mutualfd_weekday_qbid (cost=0.00..6.01 rows=1 width=19) (actual time=34.579..8510.801 rows=253 loops=1)" is the problem. As I understand it is using the index but it is low as dirt. Hopefully, the -z will fix this. I also ran the same query but with earlier dates in the table and the query ran much faster. Merge Full Join (cost=3492.48..3505.60 rows=1034 width=19) (actual time=7.605..12.851 rows=273 loops=1) Merge Cond: ("outer".d1 = "inner".pkey) -> Sort (cost=62.33..64.83 rows=1000 width=4) (actual time=4.586..5.530 rows=263 loops=1) Sort Key: d1.d1 -> Function Scan on "getDateRange" d1 (cost=0.00..12.50 rows=1000 width=4) (actual time=2.493..3.515 rows=263 loops=1) -> Sort (cost=3430.15..3432.74 rows=1034 width=19) (actual time=2.998..3.971 rows=263 loops=1) Sort Key: mutualfd_weekday_qbid.pkey -> Index Scan using mutualfd_weekday_qbid_pkey on mutualfd_weekday_qbid (cost=0.00..3378.38 rows=1034 width=19) (actual time=0.075..1.843 rows=263 loops=1) Index Cond: ((cusip = '92193920'::text) AND (pkey >= '1999-12-15'::date) AND (pkey <= '2000-12-15'::date)) Total runtime: 13.935 ms (10 rows) In this case it only took 13.935ms as compared to 8522.894 ms for the newer data. Benjamin Shoaib Mir wrote: > ....adding to the last email, for now try the work_mem but you should > be adding ANALYZE along with the VACUUM (with a cron job I guess) you > do regularly. > > > Shoaib Mir > EntperpriseDB ( www.enterprisedb.com <http://www.enterprisedb.com>) > > On 12/24/06, *Shoaib Mir* <[EMAIL PROTECTED] > <mailto:[EMAIL PROTECTED]>> wrote: > > Try increasing the work_mem first to see the change, that might help. > > - > Shoaib Mir > EnterpriseDB ( www.enterprisedb.com <http://www.enterprisedb.com>) > > On 12/24/06, *Benjamin Arai* <[EMAIL PROTECTED] > <mailto:[EMAIL PROTECTED]>> wrote: > > I have been running pieces of my PL function by hand and I > have found > that the following queries work by themselves taking less than > a second > to execute. > > getDateRange"('12/1/2005','12/1/2006') <- simply generates a > date > list. Doesn't even access a table > > SELECT * FROM mutualfd_weekday_qbid WHERE cusip='92193920' > AND pkey >= > '12/15/2005' AND pkey <= '12/15/2006'; > > But when combined as below it takes 10 seconds to execute. 
> > SELECT d1 as date, d2.data as data FROM > "getDateRange"('12/1/2005','12/1/2006') d1 FULL OUTER JOIN > (SELECT * > FROM mutualfd_weekday_qbid WHERE cusip='92193920' AND pkey >= > '12/15/2005' AND pkey <= '12/15/2006') d2 ON d1=d2.pkey; > > Do I need to increase the work_mem or is this possible still a > ANALYZE > issue? > > Benjamin > > ---(end of > broadcast)--- > TIP 7: You can help support the PostgreSQL project by donating at > > http://www.postgresql.org/about/donate > <http://www.postgresql.org/about/donate> > > >
Re: [ADMIN] DB problem
I am not sure if these option were supported with 7.3 (never used it) for logging but that does go good with 8.1 and onwards... Why don't you install the latest PostgreSQL release and test that out? --- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/26/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: Hi, We are using PostgreSQL 7.3 version, installed in linux system. Only DB is getting shut down 3-4 times a day. From my application end I get "Sorry too many clients" error. Refering the FAQ I maximized the DB session limit to 200. I think the logs are not enabled in my DB. So I set the following options in postgresql.con file log_timestamp= true log_directory=pg_log log_filename=%Y-%m-%d_%H%M%S.log log_disconnections=true log_hostname=true log_statement=true After restarting the system got the following error : [EMAIL PROTECTED] root]# su postgres bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: postgresql.conf:131: syntax error, token="%" bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: postgresql.conf:131: syntax error, token="%" bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: postgresql.conf:131: syntax error, token="%" bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: 'log_directory' is not a valid option name bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: 'log_directory' is not a valid option name bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: 'log_filename' is not a valid option name bash-2.05b$ postmaster -i -D /var/lib/pgsql/data FATAL: 'log_disconnections' is not a valid option name bash-2.05b$ postmaster -i -D /var/lib/pg When you install the DB, the logs are enabled by default? Would you pls tell me still what setting need to be done? Thanks, Vinayak V. Raikar Extn : 143
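If staying on the 7.3 series for now, these are the sorts of logging settings that existed back then, as far as I recall (the log_directory/log_filename style options only appeared in 8.0, which is why the server rejected them); please double-check the names against the 7.3 documentation:

# postgresql.conf (7.3/7.4-era names)
syslog = 2                  # 0 = stderr only, 2 = syslog only
syslog_facility = 'LOCAL0'
log_connections = true
log_timestamp = true
log_statement = true

# or simply capture stderr in a file when starting the server
pg_ctl start -D /var/lib/pgsql/data -l /var/lib/pgsql/pgsql.log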
Re: [ADMIN] About Monitoring Software
'EnterpriseDB Management Server' ( http://www.enterprisedb.com/products/enterprisedb_mgmtsvr.do) can be a good option as with your requirements ------- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/27/06, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote: On 12/27/06, Iannsp <[EMAIL PROTECTED]> wrote: > > Hi, > I did like to know with some one here use some think like nagios to > monitoring postgresql. Dear Iannsp, Nagios is a general purpose monitoring and alerting tool. Any service can have only three states NORMAL , WARNING and CRITICAL, there are plugins(program) both inbuilt or custom made that can probe for a parameter and update nagios database. Threshold for warning and critical states are defined for each service monitored. We use nagios for monitoring port 5432. It is also possible to make nagios run "SELECT 1;" so that you really know that database is answering queries. I said this because I'm work in one software to make this and I never > find onde tool to monitoring postgresql with adminsitractive data like > actual run proccess and the query's Possible. , number of connections, Possible user logged, > more commom values listed by field in the table,Use of disk, index and > another things. Diskspace monitoring Some one people know one tool to monitoring this informations using web > browser and have permission to change some one datas in the > postgresql.conf, pg_hba.conf ? Nagios is not suitable for above, its monitoring and alerting tool mainly. You may consider writing custom software using rrdtools for archiving historical data. Regds mallah. > Thank you. > > > > -- > Ivo Nascimento > Iann tech - Desenvolvendo soluções com performance e segurança > http://www.ianntech.com.br > > > > ---(end of broadcast)--- > TIP 1: if posting/reading through Usenet, please send an appropriate >subscribe-nomail command to [EMAIL PROTECTED] so that your >message can get through to the mailing list cleanly >
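In the spirit of the "SELECT 1" probe described above, a minimal shell check that a monitoring system could call (the host, user and database names are placeholders, and Nagios also ships a ready-made check_pgsql plugin):

#!/bin/sh
# exit 0 (OK) if the server answers a trivial query, 2 (CRITICAL) otherwise
if psql -h dbhost -U nagios -d postgres -tAc "SELECT 1;" 2>/dev/null | grep -q 1
then
    echo "OK - PostgreSQL is answering queries"
    exit 0
else
    echo "CRITICAL - PostgreSQL is not answering queries"
    exit 2
fi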
Re: [ADMIN] Dump and Query
You can use the COPY command (http://www.postgresql.org/docs/current/static/sql-copy.html) for doing so. An example would be: COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO '/usr1/proj/bray/sql/a_list_countries.copy'; ----- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 12/27/06, Enrico <[EMAIL PROTECTED]> wrote: Is there any way to make a dump from a query? For example if my query is: select field1,field2 from table Does it exist a shell command like pg_dump --QUERY myquery -f myfile? Have a nice day Enrico -- If Bill Gates had a penny for everytime Windows crashed,he'd be a multi-billionaire by now ...oh look, he already is [EMAIL PROTECTED] - Skype:sscotty71 http://www.linuxtime.it/enricopirozzi ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
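Since the original question asked for a shell command, here is how the same idea looks from the command line (database, table and file names are placeholders; COPY from a subquery needs 8.2, the \copy form also works on older releases, and WITH CSV needs 8.0 or later):

# 8.2 and later: COPY straight from a query
psql -d mydb -c "COPY (SELECT field1, field2 FROM mytable) TO STDOUT WITH CSV" > myfile.csv

# older releases: client-side copy of a whole table via psql
psql -d mydb -c "\copy mytable TO 'myfile.csv' WITH CSV"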
Re: [ADMIN] Replication
Did you try Slony with PostgreSQL? -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/4/07, anorganic anorganic <[EMAIL PROTECTED]> wrote: Hello, i need make with postgre something as this: i have two servers S1 and S2, connection is quit good, but sometimes once or twice per month there is short disconnection, longer discoonection 2x per year (1-2+ hours) on both are running updates i need have same data in time, but i never update same collumns on S1 an S2, S1 change another atributes than S2 = there are no conflicts, on MS SQL i use transactional replication and i like find somethin same for postgre. Can u please send me some info where can i found what i need, which postgre tool is right way for me.. Thnx a lot.
Re: [ADMIN] Autovacuum not running in 8.2.0
What logging level is currently set in your conf file? -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/4/07, Ming <[EMAIL PROTECTED]> wrote: Hi, I recently upgraded from 8.1.4 to 8.2.0, everything was fine except that I suspect that the autovacuum was not running. In pgAdmin's server properties, the "Autovacuum" property shows "runnning", but I couldn't find any log entry for the autovacuum process. I've set the new postgresql.conf to match the old one (from the 8.1.4 install), especially the autovacuum entries, but still I couldn't find any log entry for the process (The logging works fine, I tried to supply a wrong login password, and the login failure was successfully logged). Has anyone experienced similar issue? Regards, Ming ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
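For reference, a sketch of the settings autovacuum depends on in 8.1/8.2; it needs both stats collector entries turned on, and (as far as I remember) its per-table activity only shows up at the DEBUG log levels, so an empty log does not necessarily mean it is idle:

# postgresql.conf
stats_start_collector = on
stats_row_level = on
autovacuum = on
# temporarily raise verbosity while investigating
log_min_messages = debug2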
Re: [ADMIN] I want to import a database from a customer
Use the pg_dump utility to take the db dump, and then you can restore it using psql or pg_restore at your end. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/4/07, Josef Springer <[EMAIL PROTECTED]> wrote: Hi folks, i must analyze a customer's database offline. Can i import the databse of the customer into our PostgreSQL environment ? What must the customer send us ? Whats the doings for us to open it in our envioronment ? Josef ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
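A concrete sketch of that round-trip (the database name and file name are placeholders):

# at the customer's site: custom-format dump of one database
pg_dump -Fc -f customer_db.dump customer_db

# at your site: create an empty database and restore into it
createdb customer_db
pg_restore -d customer_db customer_db.dump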
Re: [ADMIN] Can not connect to database in Workgroup-Net on Windows-XP
I can't understand the exact error as it's not in English :) but it seems like an authentication problem. Please change 'md5' to 'trust' in the pg_hba.conf file and then run: pg_ctl -D <data directory> reload That will reload the authentication settings. Once the settings are reloaded, try reconnecting to the database and it should go fine this time.... -------- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/15/07, Josef Springer <[EMAIL PROTECTED]> wrote: Hi folks, i have a workgroup (not a classic server) with Windows-XP-machines: the database server machine contains the line *host allall 0.0.0.0 0.0.0.0md5* in the file pg_hba.conf and *listen_addresses = '*' * in the file postgresql.conf. The clients' host file contain the line *nnn.nnn.nnn.nnn Labor* where nnn.nnn.nnn. is the IP-adress of the database server machine and /Labor /is the machine's name. If i start PGAdmin III on the 'client' machine i can not connect to the database on the machine Labor no matter wether i use the machine's name or the IP-adress. Instead i get the message * "Zum Server ist keine Verbindung möglich, * * oder Login/Passwort sind falsch.* * Die Anmeldung wurde abgelehnt."* [in English: "No connection to the server is possible, or the login/password is wrong. The login was refused."] Please help me. What to do ? mit freundlichen Grüßen / best regards, Josef Springer (Geschäftsleitung/Management) [image: Postal] Address [EMAIL PROTECTED] Orlando-di-Lasso Str. 2 D-85640 Putzbrunn [image: Phone] Office +49 (0)89 600 6920 [image: Phone] Fax +49 (0)89 600 69220 [image: Web] Web http://www.joops.com [image: JOOPS] *-- the software company --*
Re: [ADMIN] Can not connect to database in Workgroup-Net on Windows-XP
The error you are getting is because of the password not been passed correctly so 'trust' will let you get in the server without specifying the correct password, once you are in now you can change the password as you like using ALTER USER command. Once its done change it back to md5 as it was before. ------ Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/15/07, Josef Springer <[EMAIL PROTECTED]> wrote: Hi Mir, thanks for your quick answer. We will try this. For my unterstanding: Why should i use *trust *instead of *md5 * ? A configuration at another customer (with a *domain *setting) using *md5 *works fine ? mit freundlichen Grüßen / best regards, Josef Springer (Geschäftsleitung/Management) [image: Postal] Address [EMAIL PROTECTED] Orlando-di-Lasso Str. 2 D-85640 Putzbrunn [image: Phone] Office +49 (0)89 600 6920 [image: Phone] Fax +49 (0)89 600 69220 [image: Web] Web http://www.joops.com [image: JOOPS] *-- the software company --* Shoaib Mir wrote: I can understand the exact error as its not in english :) but it seems like an authentication problem. Please change 'md5' to 'trust' in pg_hba.conf file and then use: pg_ctl -D reload That will reload the settings for authentication. Now once the settings are reloaded try reconnecting to the database and it should go fine this time Shoaib Mir EnterpriseDB ( www.enterprisedb.com ) On 1/15/07, Josef Springer <[EMAIL PROTECTED] > wrote: > > Hi folks, > > i have a workgroup (not a classic server) with Windows-XP-machines: > > the database server machine contains the line > *host allall 0.0.0.0 0.0.0.0 md5* > in the file pg_hba.conf > and > *listen_addresses = '*' * > in the file postgresql.conf. > > The clients' host file contain the line > *nnn.nnn.nnn.nnn Labor* where nnn.nnn.nnn. is the IP-adress of > the database server machine and /Labor /is the machine's name. > > If i start PGAdmin III on the 'client' machine i can not connect to > the database on the machine Labor no matter wether i use the machine's > name > or the IP-adress. Instead i get the message > > * "Zum Server ist keine Verbindung möglich, * > > * oder Login/Passwort sind falsch.* > > * Die Anmeldung wurde abgelehnt."* > > > Please help me. What to do ? > > mit freundlichen Grüßen / best regards, > Josef Springer > (Geschäftsleitung/Management) > > [image: Postal] > Address > * [EMAIL PROTECTED] * > Orlando-di-Lasso Str. 2 > D-85640 Putzbrunn > [image: Phone] > Office > +49 (0)89 600 6920 > > > [image: Phone] > Fax > +49 (0)89 600 69220 > > > [image: Web] > Web > http://www.joops.com > > > > [image: JOOPS] > *-- the software company --* > > > > >
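The full sequence described in these two messages, as a sketch (the data directory path is a placeholder, and the pg_hba.conf line mirrors the one quoted earlier in the thread):

# pg_hba.conf: temporarily switch the method on the relevant line
host    all    all    0.0.0.0    0.0.0.0    trust

# reload the authentication settings
pg_ctl -D <data directory> reload

# connect (no password will be asked now) and set a known password
psql -U postgres -c "ALTER USER postgres PASSWORD 'newpassword';"

# finally change trust back to md5 in pg_hba.conf and reload once more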
Re: [ADMIN] Incremental Backup of a particular database
Even with ver 8.1 you can use WAL archive backups (PITR) to implement an incremental backup strategy. - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/17/07, Andy Shellam (Mailing Lists) < [EMAIL PROTECTED]> wrote: I believe the option for a "warm standby" (which sounds like it is what you need) was made available in 8.2.0. You'd have to look at the manual though, I haven't had time to investigate, and I'm not sure whether it's cluster-wide, or database-specific. Yogvinder Singh wrote: > Hi Ppl, > > I have Two PostgreSQL Database Servers. The situation is like this:- > 1) On first database server ball the operations (i.e live server) will > be done. This server may contain n number of databases. > > 2) On the second database server, i want to keep one particular > database (from the first database server ) and keep it updated to the > latest from the First Database Server. I just want to be able to take > some sort of incremental backup from the first server and restore it > to the second server. > > Is this situation feasible. How can i take an incremental backup of a > particular database out of many databases? On which version this is > supported? IF not supported, is there any alternative way to handle > this situatio. > > Regards, > Yogvinder Singh, > !DSPAM:37,45ae2b2f137105304223843! Disclaimer :- This e-mail message > including any attachment may contain confidential, proprietary or > legally privileged information. It should not be used by who is not > the original intended recipient. If you have erroneously received this > message, you are notified that you are strictly prohibited from using, > copying, altering or disclosing the content of this message. Please > delete it immediately and notify the sender. Newgen Software > Technologies Ltd and / or its subsidiary Companies accept no > responsibility for loss or damage arising from the use of the > information transmitted by this email including damage from virus and > further acknowledges that any views expressed in this message are > those of the individual sender and no binding nature of the message > shall be implied or assumed unless the sender does so expressly with > due authority of Newgen Software Technologies Ltd and / or its > subsidiary Companies, as applicable. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [ADMIN] Incremental Backup of a particular database
Yes, I guess that is true with 8.2 but with a few configurations for WAL copying scripts. Saw the following in documentation that specifies a good flow for doing it --> http://www.postgresql.org/docs/8.2/static/warm-standby.html --- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/17/07, Andy Shellam (Mailing Lists) < [EMAIL PROTECTED]> wrote: Shoaib Mir wrote: > Even with ver 8.1 you can use WAL archive backups (PITR) to implement > an incremental backup strategy. Yes, true, but it's harder with 8.1 as you always have to start from a base backup and roll WAL logs forward - without a custom script you cannot "wait" for new WAL logs to arrive and apply them on top of a base backup. There is a project in PgFoundry to allow for warm-standby in the 8.0 and 8.1 series, however this is reportedly built-in to 8.2 now. > > - > Shoaib Mir > EnterpriseDB (www.enterprisedb.com <http://www.enterprisedb.com>) > > On 1/17/07, *Andy Shellam (Mailing Lists)* > <[EMAIL PROTECTED] > <mailto:[EMAIL PROTECTED]>> wrote: > > I believe the option for a "warm standby" (which sounds like it is > what > you need) was made available in 8.2.0. > You'd have to look at the manual though, I haven't had time to > investigate, and I'm not sure whether it's cluster-wide, or > database-specific. > > > Yogvinder Singh wrote: > > Hi Ppl, > > > > I have Two PostgreSQL Database Servers. The situation is like this:- > > 1) On first database server ball the operations (i.e live > server) will > > be done. This server may contain n number of databases. > > > > 2) On the second database server, i want to keep one particular > > database (from the first database server ) and keep it updated > to the > > latest from the First Database Server. I just want to be able to > take > > some sort of incremental backup from the first server and restore it > > to the second server. > > > > Is this situation feasible. How can i take an incremental backup > of a > > particular database out of many databases? On which version this is > > supported? IF not supported, is there any alternative way to handle > > this situatio. > > > > Regards, > > Yogvinder Singh, > > Disclaimer :- This e-mail message > > including any attachment may contain confidential, proprietary or > > legally privileged information. It should not be used by who is not > > the original intended recipient. If you have erroneously > received this > > message, you are notified that you are strictly prohibited from > using, > > copying, altering or disclosing the content of this message. Please > > delete it immediately and notify the sender. Newgen Software > > Technologies Ltd and / or its subsidiary Companies accept no > > responsibility for loss or damage arising from the use of the > > information transmitted by this email including damage from > virus and > > further acknowledges that any views expressed in this message are > > those of the individual sender and no binding nature of the message > > shall be implied or assumed unless the sender does so expressly > with > > due authority of Newgen Software Technologies Ltd and / or its > > subsidiary Companies, as applicable. > > ---(end of > broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq > > > !DSPAM:37,45ae3b61137103926781350!
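A bare-bones illustration of the warm-standby wiring the 8.2 docs describe (the script name and archive path are invented for the example; the documented version adds size checks and a trigger file for failover, so treat this only as the shape of the idea):

# recovery.conf on the standby
restore_command = '/usr/local/bin/wait_for_wal.sh /mnt/backup/wal/%f %p'

# /usr/local/bin/wait_for_wal.sh
#!/bin/sh
# block until the requested WAL segment appears in the archive, then hand it over
while [ ! -f "$1" ]; do
    sleep 5
done
cp "$1" "$2"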
Re: [ADMIN] Incremental Backup of a particular database
When the pg_dump process starts it takes one point and then makes a backup for that specific time even if more updates on the database are happening during the pg_dump process. So when you restore on the other machine it will restore to that specific time when the pg_dump process started. Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/18/07, Yogvinder Singh <[EMAIL PROTECTED]> wrote: pg_dump is a utility for saving a PostgreSQL database into a script or an archive file. The script files are in plain-text format and contain the SQL commands required to reconstruct the database to the state it was in at the time it was saved. "At the time it was saved" What is the meaning of this statement? Regards, Yogvinder Singh, Software Engineer, E-WorkStyle Group, Newgen Software Technologies Ltd. D-152, Okhla Phase - 1 New Delhi Tel no 26815467 Ext 143 - Original Message - From: "Yogvinder Singh" <[EMAIL PROTECTED]> To: "Andy Shellam (Mailing Lists)" <[EMAIL PROTECTED]> Cc: Sent: Thursday, January 18, 2007 10:44 AM Subject: Re: [ADMIN] Incremental Backup of a particular database > I only want one of many databases to be restored to the other server > incrementally. > Is this possible? > If yes, Wht are the steps?? > > Regards, > Yogvinder Singh, > Software Engineer, E-WorkStyle Group, > Newgen Software Technologies Ltd. > D-152, Okhla Phase - 1 > Tel no 26815467 Ext 143 > > > - Original Message - > From: "Andy Shellam (Mailing Lists)" <[EMAIL PROTECTED]> > To: "Yogvinder Singh" <[EMAIL PROTECTED]> > Cc: > Sent: Wednesday, January 17, 2007 8:17 PM > Subject: Re: [ADMIN] Incremental Backup of a particular database > > > > I believe the option for a "warm standby" (which sounds like it is what > > you need) was made available in 8.2.0. > > You'd have to look at the manual though, I haven't had time to > > investigate, and I'm not sure whether it's cluster-wide, or > > database-specific. > > > > > > Yogvinder Singh wrote: > > > Hi Ppl, > > > > > > I have Two PostgreSQL Database Servers. The situation is like this:- > > > 1) On first database server ball the operations (i.e live server) will > > > be done. This server may contain n number of databases. > > > > > > 2) On the second database server, i want to keep one particular > > > database (from the first database server ) and keep it updated to the > > > latest from the First Database Server. I just want to be able to take > > > some sort of incremental backup from the first server and restore it > > > to the second server. > > > > > > Is this situation feasible. How can i take an incremental backup of a > > > particular database out of many databases? On which version this is > > > supported? IF not supported, is there any alternative way to handle > > > this situatio. > > > > > > Regards, > > > Yogvinder Singh, > > > !DSPAM:37,45ae2b2f137105304223843! Disclaimer :- This e-mail message > > > including any attachment may contain confidential, proprietary or > > > legally privileged information. It should not be used by who is not > > > the original intended recipient. If you have erroneously received this > > > message, you are notified that you are strictly prohibited from using, > > > copying, altering or disclosing the content of this message. Please > > > delete it immediately and notify the sender. 
Newgen Software > > > Technologies Ltd and / or its subsidiary Companies accept no > > > responsibility for loss or damage arising from the use of the > > > information transmitted by this email including damage from virus and > > > further acknowledges that any views expressed in this message are > > > those of the individual sender and no binding nature of the message > > > shall be implied or assumed unless the sender does so expressly with > > > due authority of Newgen Software Technologies Ltd and / or its > > > subsidiary Companies, as applicable. > > > > ---(end of broadcast)--- > > TIP 3: Have you checked our extensive FAQ? > > > >http://www.postgresql.org/docs/faq > > > > > > Disclaimer :- This e-mail message including any attachment may contain confidential, proprietary or legally privileged information. It should not be used by who is not the original intended recipient. If you have erroneously received this message, you are notified that you are strictly prohibited from us
Re: [ADMIN] Install XML on Postgres 8.1
Can we compare the same powerful like Oracle has on Xml? I don't think so that current xml2 support is according to the SQL 2003 standards and for now it is I guess just the basic one. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/22/07, Rajesh Kumar Mallah <[EMAIL PROTECTED]> wrote: On 1/22/07, Alexander B. <[EMAIL PROTECTED]> wrote: > Hi, > I need to install XML on Postgres 8.1. XML support is provided in PostgreSQL via a contrib module xml2. look for xml related folders in contrib folder after untarring the pgsql source. go through the README files on contrib modules to know the features. > What's the simples steps that I can follow? how have you installed postgresql ? > I found on the internet, but there are many versions, packages and ways > to install. > What's the right way to install? have you installed from source or rpm ? > > Can we compare the same powerful like Oracle has on Xml? what is your requirement ? postgresql xml2 module i think has xslt and xpath support. check the README file of xml2 folder. > > If you could help me!! > Thanks in advance. > > > > > > ___ > Yahoo! Mail - Sempre a melhor opção para você! > Experimente já e veja as novidades. > http://br.yahoo.com/mailbeta/tudonovo/ > > ---(end of broadcast)--- > TIP 5: don't forget to increase your free space map settings > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
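For a flavour of what the xml2 contrib module gives you once it is installed (the table and column names here are made up, and the function names are from memory, so check the module's README):

-- pull a scalar value out of an XML document with an XPath expression
SELECT xpath_string(content, '/book/title/text()') FROM documents;

-- XPath expressions can also be used for filtering
SELECT id FROM documents WHERE xpath_bool(content, '/book[@lang="en"]');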
Re: [ADMIN] Install XML on Postgres 8.1
Is an XML document indexing mechanism in the TODOs for 8.3? This is one of the problems I faced while moving Oracle-like XML functionality to PostgreSQL using the xml2 contrib module. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/24/07, Peter Eisentraut <[EMAIL PROTECTED]> wrote: Shoaib Mir wrote: > > Can we compare the same powerful like Oracle has on Xml? > > I don't think so that current xml2 support is according to the SQL > 2003 standards and for now it is I guess just the basic one. The functionality of the xml2 contrib module is almost completely orthogonal to what the SQL standard says in matters of XML. Whether either of these are equally "powerful" as what Oracle offers, I can't say, but since we're currently working on enhanced XML functionality in 8.3, I invite comments on that. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [ADMIN] Install XML on Postgres 8.1
The following will explain in detail how Oracle handles the XML data and also the indexing and searching mechanism for it: http://www.oracle.com/technology/oramag/oracle/04-sep/o54xml.html - Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/24/07, Peter Eisentraut <[EMAIL PROTECTED]> wrote: Shoaib Mir wrote: > Do we have XML docs indexing mechanism in the todo's for 8.3? as this > is one of the problem I faced while moving the Oracle like XML > functionality to PostgreSQL using xml2 contrib. Indexing of data types is done in terms of operator classes (or maybe operator families in 8.3, not sure), but so far no one has proposed to define any operators whatsoever on the XML type, so I don't see where any indexing mechanism would come from at the moment. What does Oracle offer? -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [ADMIN] Case-sensitive
'citext' module (http://gborg.postgresql.org/project/citext/projdisplay.php) might help you with this ------- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/25/07, Alexander B. <[EMAIL PROTECTED]> wrote: Hi, One developer asked me about case-sensitive, if its possible disable Postgres not consider when using comparative functions. Eg.: select * from people where upper(name) like upper('A%'); Is there any parameter that don't take in account case-sensitive, or any other way to treat?? Thanks ___ Yahoo! Mail - Sempre a melhor opção para você! Experimente já e veja as novidades. http://br.yahoo.com/mailbeta/tudonovo/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
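Two other common ways to handle this, sticking with the example from the question (people/name come from there; whether the planner actually uses the expression index for a LIKE pattern also depends on the locale, so verify with EXPLAIN):

-- ILIKE compares case-insensitively without any extra setup
SELECT * FROM people WHERE name ILIKE 'a%';

-- or keep the upper(...) comparison and back it with an expression index
CREATE INDEX people_upper_name_idx ON people (upper(name));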
Re: [ADMIN] 8.1.3 Problem
This looks like a data corruption case; a few cases like this have been discussed in the past as well. The following link might help you resolve the problem: http://archives.postgresql.org/pgsql-general/2006-12/msg01546.php -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/1/07, Chris Hoover <[EMAIL PROTECTED]> wrote: I am starting to have some significant problems with one of my 8.1.3databases. I am getting more and more of these errors: wbd,20552,dholton,2007-01-31 15:32:27.137 EST,45c0fcb4.5048,16,2007-01-31 15:31:48 EST,342126177,UPDATE,ERROR: could not access status of transaction 253762136 wbd,20552,dholton,2007-01-31 15:32:27.137 EST,45c0fcb4.5048,17,2007-01-31 15:31:48 EST,342126177,UPDATE,DETAIL: could not open file "pg_clog/00F2": No such file or directory This is happening more and more, and on seeming random databases. Any idea what is going on? I have already dumped and reloaded 2-3 of my databases and now have about 3 more suffering from this. Right now, I can't upgrade to 8.1.6 until around March (have to wait until the current release of our application is released). I have 7 databases running 8.1.3 (6 prod and 1 test/dev) and only the test/dev is exhibiting this problem Any help/ideas on what is going on and how to fix would be appreciated. Chris
Re: [ADMIN] Install Tsearch2
It is already there in the contrib folder within the source. While in the tsearch2 folder you just need to do a 'make' and 'make install' and then run the tsearch2.sql file so that you can register the tsearch2 functions. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/2/07, Alexander B. <[EMAIL PROTECTED]> wrote: Hi, I need to install tsearch2, but I couldn't find a procedure (step-by-step). Could you recomend some site or some steps to install. I used PG 8 on Suse and Debian, and I installed postgres by source. Thanks in advance. ___ Yahoo! Mail - Sempre a melhor opção para você! Experimente já e veja as novidades. http://br.yahoo.com/mailbeta/tudonovo/ ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
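Spelled out as commands for a source install (the source and install paths are the usual defaults and may differ on your box):

cd /path/to/postgresql-8.x-source/contrib/tsearch2
make
make install

# register the functions in the target database
psql -d yourdb -f /usr/local/pgsql/share/contrib/tsearch2.sql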
Re: [ADMIN] Install Tsearch2
yes, you need to just correct the build directory paths... --- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/2/07, Achilleas Mantzios <[EMAIL PROTECTED]> wrote: Στις Παρασκευή 02 Φεβρουάριος 2007 14:05, ο/η Alexander B. έγραψε: > I am getting the follow error: > > analise3:/postgres/share/contrib/tsearch2 # make > Makefile:31: ../../src/Makefile.global: No such file or directory > Makefile:32: /contrib/contrib-global.mk: No such file or directory > make: *** No rule to make target `/contrib/contrib-global.mk'. Stop. > analise3:/postgres/share/contrib/tsearch2 # Edit Makefile and correct the top_builddir = ../.. line to your actual postgresql sources. > > > What do I need to do? > Thanks > > Jeff Frost wrote: > > BTW, this was for an RPM installed version of postgresql. As Shoab > > mentions in another mail, if you have compiled postgresql from > > tarball, you'll have to cd into the contrib/tsearch2 dir and make && > > make install before you can do this stuff. > > > > On Thu, 1 Feb 2007, Jeff Frost wrote: > >> I believe I followed these instructions the last time I enabled > >> tsearch2: > >> > >> http://www.sai.msu.su/~megera/wiki/tsearch-v2-intro > >> > >> These are my crib notes for the English version, you'll have to > >> update paths etc: > >> > >> wget > >> http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/dicts/ispell/ispe > >>ll-english.tar.gz > >> > >> cd /usr/local/lib > >> sudo tar xvfz /usr/local/src/TARFILES/ispell-english.tar.gz > >> > >> psql -f /usr/share/pgsql/contrib/tsearch2.sql ftstest > >> > >> INSERT INTO pg_ts_cfg (ts_name , prs_name, locale ) values ( > >> 'default_english', 'default', 'en_US'); > >> > >> INSERT INTO pg_ts_dict > >> (SELECT 'en_ispell', > >> dict_init, > >> 'DictFile="/usr/local/lib/english.dict",' > >> 'AffFile="/usr/local/lib/english.aff",' > >> 'StopFile="/usr/share/pgsql/contrib/english.stop"', > >> dict_lexize > >> FROM pg_ts_dict > >> WHERE dict_name = 'ispell_template'); > >> > >> INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) > >> VALUES ('default_english', 'lhword', '{en_ispell,en_stem}'); > >> INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) > >> VALUES ('default_english', 'lpart_hword', '{en_ispell,en_stem}'); > >> INSERT INTO pg_ts_cfgmap (ts_name, tok_alias, dict_name) > >> VALUES ('default_english', 'lword', '{en_ispell,en_stem}'); > >> > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'url', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'host', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'sfloat', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'uri', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'int', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'float', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'email', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'word', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'hword', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'nlword', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'nlpart_hword', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'part_hword', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'nlhword', '{simple}'); > >> INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'file', '{simple}'); > >> 
INSERT INTO pg_ts_cfgmap > >> VALUES ('default_english', 'uint', '{simple}');
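As a rough sketch of the build-path fix (the source path below is only an illustration; point it at wherever you actually unpacked and built PostgreSQL), the relevant lines in contrib/tsearch2/Makefile would end up looking like:

top_builddir = /usr/local/src/postgresql-8.1.3
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk

After that, make && make install run from the same directory should find the global makefiles it was complaining about.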
Re: [ADMIN] Postgres is not starting or stopping
If you can tell the specific error on start/stop then someone might be able to help you with this... -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/2/07, Manish Pillai <[EMAIL PROTECTED]> wrote: Hi all, I have two production server .One ia already live and the other is going to live soon. In the 1st server i have postgres 7.4.6 and having no problem In the second server we have postgres (PostgreSQL) 7.4.15. Here i have made a mistake. I have removed postgresql.conf and pg_hba.conf and kept the same files from the first server (7.4.6). After this process the postgres is not able to start/stop. Then i removed the two files which i have copied from the first server and copied the the orginal files to the exact place (/var/lib/pgsql/data).But still the database is not able to start or stop.. Please help if u have any idea Regards Manish -- Need a quick answer? Get one in minutes from people who know. Ask your question on Yahoo! Answers<http://answers.yahoo.com/;_ylc=X3oDMTFvbGNhMGE3BF9TAzM5NjU0NTEwOARfcwMzOTY1NDUxMDMEc2VjA21haWxfdGFnbGluZQRzbGsDbWFpbF90YWcx> .
Re: [ADMIN] Postgres is not starting or stopping
Instead of starting with the service, try starting it up with the 'postmaster' binary. It can be found in the database server 'bin' folder and that should give a more detailed failure message. To enable logging please go through --> "16.4.5. Error Reporting and Logging" at http://www.postgresql.org/docs/7.4/interactive/runtime-config.html -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/2/07, Manish Pillai <[EMAIL PROTECTED]> wrote: Hi, I am getting only this much... [EMAIL PROTECTED] init.d]# ./postgresql restart Stopping postgresql service: [FAILED] Starting postgresql service: [FAILED] [EMAIL PROTECTED] init.d]# How to enable log in postgres...Please help Regards Manish *Shoaib Mir <[EMAIL PROTECTED]>* wrote: If you can tell the specific error on start/stop then someone might be able to help you with this... -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/2/07, Manish Pillai <[EMAIL PROTECTED]> wrote: > > Hi all, > > > I have two production server .One ia already live and the other is > going to live soon. > > In the 1st server i have postgres 7.4.6 and having no problem > > In the second server we have postgres (PostgreSQL) 7.4.15. Here i have > made a mistake. I have removed postgresql.conf and pg_hba.conf and kept > the same files from the first server (7.4.6). > > > After this process the postgres is not able to start/stop. Then i > removed the two files which i have copied from the first server and copied > the the orginal files to the exact place (/var/lib/pgsql/data).But still the > database is not able to start or stop.. > > > Please help if u have any idea > > Regards > Manish > > -- > Need a quick answer? Get one in minutes from people who know. Ask your > question on Yahoo! Answers > <http://answers.yahoo.com/;_ylc=X3oDMTFvbGNhMGE3BF9TAzM5NjU0NTEwOARfcwMzOTY1NDUxMDMEc2VjA21haWxfdGFnbGluZQRzbGsDbWFpbF90YWcx>. > > -- Want to start your own business? Learn how on Yahoo! Small Business.<http://us.rd.yahoo.com/evt=41244/*http://smallbusiness.yahoo.com/r-index>
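For example, on a stock RPM layout (paths are illustrative, adjust them for your install), running this as the postgres user prints the startup error straight to the terminal:

su - postgres
/usr/bin/postmaster -D /var/lib/pgsql/data

A broken postgresql.conf or pg_hba.conf entry is normally reported there together with the offending line number.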
Re: [ADMIN] Backup Strategies?
A cron job can always do the job using pg_dump/pg_dumpall; if you need incremental backups, you can opt for PITR with WAL-based archive logs... -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/3/07, Joshua Kramer <[EMAIL PROTECTED]> wrote: Hello All, What strategies are people using for automated, script-based backup of databases? There are a few I can think of: 1. Create a "db_backup" unix user and a "db_backup" pgsql user. Grant READ access to all objects on all databases for the "db_backup" pgsql user. Create a .pgpass file in the home directory of the "db_backup" unix user. Backup as needed with a script run as the "db_backup" unix user. 2. Create a "db_backup" unix user and repeat above, except using the "postgres" db user. Thanks, -Josh ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
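A minimal sketch of the cron approach (paths, schedule and backup location are just placeholders), as a crontab entry for the postgres user:

0 2 * * * /usr/bin/pg_dumpall | gzip > /backups/pg_dumpall_$(date +\%Y\%m\%d).sql.gz

with a .pgpass file (or trust/ident auth) in place so it runs unattended. For the PITR route you would instead set archive_command in postgresql.conf and take periodic base backups, as described in the online backup chapter of the docs.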
Re: [ADMIN] Terminating a query that appears to be hung
Use SIGTERM for killing only one process. You can also use kill -INT. It sends a query cancel to the backend and will not disturb other processes and the database server. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/3/07, Robert D Oden <[EMAIL PROTECTED]> wrote: Thank you for the response. We are using Linux and I am able to identify the offending . I was not sure if killing the process was an acceptable and safe practice. Thanks again! Robert Guido Barosio wrote: > Hmm, if you are a sysadmin you may want to get the process id for the > query, running something like: > > > Select procpid,current_query > from pg_stat_activity > where datname="dbname"; > > If you are running Linux, you'll get the procpid (pid) for the > offending query and by sending "kill -15 " you are > killing/terminating the query. > > No idea on how to proceed if your platform is Windows. > > Best wishes, > Guido Barosio > > > On 1/29/07, Phillip Smith <[EMAIL PROTECTED]> wrote: >> Can youo be a bit more specific...? What client? >> >> Psql should terminate the connection when it exist. pgAdmin will >> close it >> when you exit the ap, or if you choose 'Disconnect' from the menu. In >> PHP >> you'll need to call pg_close function... Every 'client' will have a >> different method... >> >> -Original Message- >> From: [EMAIL PROTECTED] >> [mailto:[EMAIL PROTECTED] On Behalf Of Robert D Oden >> Sent: Tuesday, 30 January 2007 2:03 AM >> To: PgSQL ADMIN >> Subject: [ADMIN] Terminating a query that appears to be hung >> >> Is there a way to terminate a client session without having to restart >> Postgres. >> >> Thanks >> >> -- >> >> Robert D Oden >> Database Marketing Technologies, Inc >> 951 Locust Hill Circle >> Belton MO 64012-1786 >> >> Ph: 816-318-8840 >> Fax: 816-318-8841 >> >> [EMAIL PROTECTED] >> >> >> ---(end of broadcast)--- >> TIP 7: You can help support the PostgreSQL project by donating at >> >> http://www.postgresql.org/about/donate >> >> >> ***Confidentiality and Privilege >> Notice*** >> >> The material contained in this message is privileged and confidential to >> the addressee. If you are not the addressee indicated in this >> message or >> responsible for delivery of the message to such person, you may not copy >> or deliver this message to anyone, and you should destroy it and kindly >> notify the sender by reply email. >> >> Information in this message that does not relate to the official >> business >> of Weatherbeeta must be treated as neither given nor endorsed by >> Weatherbeeta. >> Weatherbeeta, its employees, contractors or associates shall not be >> liable >> for direct, indirect or consequential loss arising from transmission >> of this >> message or any attachments >> >> ---(end of broadcast)--- >> TIP 2: Don't 'kill -9' the postmaster >> > > -- Robert D Oden Database Marketing Technologies, Inc 951 Locust Hill Circle Belton MO 64012-1786 Ph: 816-318-8840 Fax: 816-318-8841 [EMAIL PROTECTED] ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
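A quick sketch of the kill -INT route (the WHERE clause is just an example filter): first find the backend,

select procpid, current_query from pg_stat_activity where current_query not like '<IDLE>%';

then, from the shell on the database server, send the cancel to that single pid:

kill -INT <procpid>

kill -TERM <procpid> is the stronger option and terminates just that backend; either way, never kill -9 the postmaster.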
Re: [ADMIN] Backup Strategies?
Going with a PostgreSQL backup user (with read-only permissions) seems to be a better option for your case. Anyone else on the list might be able to come up with much better options. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/3/07, Phillip Smith <[EMAIL PROTECTED]> wrote: Have you looked at pg_dump and/or pg_dumpall? That would be the more common way of performing backups. I run 2 backup's nightly of our databases: 1) As part of a backup script, call pg_dumpall as the postgres *nix users and PG user, then pipe the output to a file on a remote server. 2) As a cronjob: pg_dumpall > /dev/st0 Cheers, -p On Sat, 2007-02-03 at 01:51 -0500, Joshua Kramer wrote: > Hello All, > > What strategies are people using for automated, script-based backup of > databases? There are a few I can think of: > > 1. Create a "db_backup" unix user and a "db_backup" pgsql user. Grant > READ access to all objects on all databases for the "db_backup" pgsql > user. Create a .pgpass file in the home directory of the "db_backup" unix > user. Backup as needed with a script run as the "db_backup" unix user. > > 2. Create a "db_backup" unix user and repeat above, except using the > "postgres" db user. > > Thanks, > -Josh > > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [ADMIN] Postgresql driver name
Hmm, that should be 'org.postgresql.Driver' (note the capital D); you can find more details at --> http://jdbc.postgresql.org/ I would recommend posting such questions on the JDBC list in future, as you will get a lot more help there... -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/31/07, Seham hammad mohamed <[EMAIL PROTECTED]> wrote: Hi, I am using postgresql 8.1 an I am trying to connect to the postsql from JSP and servlets but I cannot find the driver name for the postgresql so I can use it in the jsp connection. I would appreciate your help. Thanks, Seham. This message has been scanned for content and viruses by the DIT Information Services MailScanner Service and is believed to be clean. http://www.dit.ie ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
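A tiny sketch of its use from JSP/servlet code (host, port and database name are placeholders): load it with Class.forName("org.postgresql.Driver") and connect with a URL of the form jdbc:postgresql://localhost:5432/mydb, making sure the PostgreSQL JDBC jar is on the application server's classpath.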
Re: [ADMIN] Unable to pg_dump 8.1.2 - memory alloc error
Hmmm, that looks to me like a hardware problem causing data corruption. Try doing a REINDEX on the database; if possible, restart the database server machine and then try taking the dump again to see if that works. But I would recommend running memory tests on the hardware... -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 1/31/07, airbulb <[EMAIL PROTECTED]> wrote: Hello I get the following error when dumping a pg database with WIN_1252 encoding: pg_dump --clean --no-owner dumpfile | gzip >/home/postgres/globalen.dump.gz 2>>./err.out pg_dump: ERROR: invalid memory alloc request size 4294967293 pg_dump: SQL command to dump the contents of table "site" failed: PQendcopy() failed. pg_dump: Error message from server: ERROR: invalid memory alloc request size 4294967293 pg_dump: The command was: COPY public.site (site_id, url, status, created, "domain", title) TO stdout; Possibly related is a particular query with pgAdmin on same site table with above error yields ERROR: invalid multibyte character for locale I have upgraded from 8.1.2 to 8.1.6 but get the same error. Any help appreciated. Regards, David.
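For the reindex step, a sketch (the database name is a placeholder) would be, from psql as a superuser:

REINDEX DATABASE mydb;

or the reindexdb command-line wrapper that ships with 8.1, run at a quiet time since it locks each table while rebuilding its indexes.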
Re: [ADMIN] WAL files backup
What do you mean by identical here? Do you mean that they are the same size? If so, then yes, they will all be the same size unless you set archive_timeout (an 8.2 config parameter) to force a log switch after a certain amount of time. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 2/12/07, pedro noticioso <[EMAIL PROTECTED]> wrote: hi there I lightly read this http://www.postgresql.org/docs/8.1/static/backup-online.html and am interested in creating an incremental backup of WAL files, but my database is small so each of this WAL files must be almost identical to the previous one. Is there a way to incrementally backup so that it takes less space? thanks! Finding fabulous fares is fun. Let Yahoo! FareChase search your favorite travel sites to find flight and hotel bargains. http://farechase.yahoo.com/promo-generic-14795097 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
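For example (the interval is only an illustration), in postgresql.conf on 8.2:

archive_timeout = 3600   # switch/archive a WAL segment at least once an hour

The archived segments are still full 16MB files, so any space saving has to come from compressing them (e.g. piping through gzip in your archive_command) rather than from the segments themselves shrinking.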
Re: [ADMIN] pl/pgsql function spikes CPU 100%
Are the stat collector and autovacuum processes in good shape? -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 3/16/07, Jeff Frost <[EMAIL PROTECTED]> wrote: I've got a client that has a function in a db which had been humming along quite nicely on 2xOpteron 275, PG 8.1.5, 8GB of RAM. Now suddenly many of the functions in the DB if called will spike the CPU to 100%. These are functions that used to finish in 7ms, now run for 20-40 mins. Interestingly, when you strace the backend, it doesn't appear to be doing too much...here's some sample output: select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) semop(3932217, 0x7fbfffd150, 1) = 0 semop(3932217, 0x7fbfffd150, 1) = 0 semop(3932217, 0x7fbfffd150, 1) = 0 semop(3932217, 0x7fbfffd150, 1) = 0 semop(3932217, 0x7fbfffd150, 1) = 0 select(0, NULL, NULL, NULL, {0, 1000}) = 0 (Timeout) semop(3997755, 0x7fbfffd170, 1) = 0 semop(3932217, 0x7fbfffd150, 1) = 0 Any chance we've stumbled into some corner case bug? We actually moved the DB to a different server thinking perhaps we had gotten to the limit of slow hardware, but in fact it happens on the other server as well. I don't see any ungranted locks in pg_locks, nor are there any other non idle queries this time of the night. I'll see if I can share the function code tomorrow when people are awake again in case we have something especially silly in there. -- Jeff Frost, Owner <[EMAIL PROTECTED]> Frost Consulting, LLC http://www.frostconsultingllc.com/ Phone: 650-780-7908 FAX: 650-649-1954 ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [ADMIN] max_fsm_pages
First of all, run VACUUM FULL ANALYZE; that should hopefully help you get rid of this problem without changing max_fsm_pages. Other than that, I would recommend having the autovacuum process in place for proper database maintenance. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 4/1/07, Abu Mushayeed <[EMAIL PROTECTED]> wrote: PG DB: 8.1.3 12 DB in the cluster When I do a full vacuum of the database cluster, at the end I am getting the message as follows: psql:dba_task.sql:1: INFO: free space map contains 8033993 pages in 1023 relations DETAIL: A total of 800 page slots are in use (including overhead). 8201952 page slots are required to track all free space. Current limits are: 800 page slots, 32768 relations, using 49054 KB. psql:dba_task.sql:1: NOTICE: number of page slots needed (8201952) exceeds max_fsm_pages (800) HINT: Consider increasing the configuration parameter "max_fsm_pages" to a value over 8201952. My questions are: 1. With out increasing the max_fsm_pages, can I bring down the need for fsm_pages by deleting unnecessary tables? 2. As, the database grows the need for max_fsm_pages grows too, what is the best practice? 3. As a new postgres DBA, how should one manage this parameter? Thanks Abu Mushayeed -- It's here! Your new message! Get new email alerts<http://us.rd.yahoo.com/evt=49938/*http://tools.search.yahoo.com/toolbar/features/mail/>with the free Yahoo! Toolbar.
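If you do end up raising it, a sketch of the change (the number is simply the HINT value from your output, rounded up) would be, in postgresql.conf:

max_fsm_pages = 8500000

followed by a full server restart, since max_fsm_pages cannot be changed with a reload. Each page slot costs roughly six bytes of shared memory, so 8.5 million slots is only on the order of 50MB.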
Re: [ADMIN] Increasing the shared memory
I guess shared_buffers (in the postgresql.conf file) will help you here, provided you have properly set up your kernel SHMMAX value. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 4/2/07, Sorin N. Ciolofan <[EMAIL PROTECTED]> wrote: Hello! I'd like to ask you if there is any Postgre configuration parameter (like the ones defined in postgresql.conf file) that could be used for increasing the shared memory for Postgre? Thank you very much With best regards, Sorin
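A minimal sketch, assuming a Linux box (the numbers are placeholders just to show the shape of the change): in postgresql.conf set

shared_buffers = 32000   # counted in 8kB buffers when no unit is given, i.e. ~250MB

and make sure the kernel allows a shared memory segment that large, e.g.

sysctl -w kernel.shmmax=536870912

(persist it in /etc/sysctl.conf), then restart PostgreSQL, since shared_buffers is only read at server start.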
Re: [ADMIN] Dumping views, functions, ...
You can use the following query to get all the views: select 'CREATE VIEW '||viewname||' AS '||definition as viewSQL from pg_views where schemaname=''; -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 4/2/07, Arnau <[EMAIL PROTECTED]> wrote: Hi all, Lets say that in my database I have a view named foo_view and I'd like to dump it's definition. I'd like to do something like: pg_dump --view=foo_view bar_db > foo_view.log so I'd have in the foo_view.log file the foo_view's definition. This same concept I'm searching to apply it to rules, functions, ... I have been looking into pg_dump options and I haven't found the way. The most similar thing I have found is -t option that does what I need but with tables. Is it possible to do this with pg_dump? Thanks -- Arnau ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Increasing the shared memory
An extract from --> http://www.powerpostgresql.com/PerfList/ might help you shared_buffers: As a reminder: This figure is NOT the total memory PostgreSQL has to work with. It is the block of dedicated memory PostgreSQL uses for active operations, and should be a minority of your total RAM on the machine, since PostgreSQL uses the OS disk cache as well. Unfortunately, the exact amount of shared buffers required is a complex calculation of total RAM, database size, number of connections, and query complexity. Thus it's better to go with some rules of thumb in allocating, and monitor the server (particuarly pg_statio views) to determine adjustments. On dedicated servers, useful values seem to be between between 8MB and 400MB (between 1000 and 50,000 for 8K page size). Factors which raise the desired shared buffers are larger active portions of the database, large complex queries, large numbers of simultaneous queries, long-running procedures or transactions, more available RAM, and faster/more CPUs. And, of course, other applications on the machine. Contrary to some expectations, allocating much too much shared_buffers can actually lower peformance, due time required for scanning. Here's some examples based on anecdotes and TPC tests on Linux machines: * Laptop, Celeron processor, 384MB RAM, 25MB database: 12MB/1500 * Athlon server, 1GB RAM, 10GB decision-support database: 120MB/15000 * Quad PIII server, 4GB RAM, 40GB, 150-connection heavy transaction processing database: 240MB/3 * Quad Xeon server, 8GB RAM, 200GB, 300-connection heavy transaction processing database: 400MB/5 Please note that increasing shared_buffers, and a few other memory parameters, will require you to modify your operating system's System V memory parameters. See the main PostgreSQL documentation for instructions on this. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 4/2/07, Sorin N. Ciolofan <[EMAIL PROTECTED]> wrote: Thanks, I've a value of 1000 set for shared_buffers, does this means that I use 8kbX1000=8Mb of Shared Mem? The definition from the manual is quite confusing: shared_buffers (integer) Sets the amount of memory the database server uses for shared memory buffers. The default is typically 32 megabytes (32MB), but may be less if your kernel settings will not support it (as determined during initdb). This setting must be at least 128 kilobytes and at least 16 kilobytes times max_connections<http://www.postgresql.org/docs/current/static/runtime-config-connection.html#GUC-MAX-CONNECTIONS>. What does the integer number represent? Number of shared buffers? If yes, what size does each shared buffer have? "The default is typically 32 megabytes" suggests that this integer could also represent the number of megabytes?!? In the postgresql.conf file is an ambiguous comment that could induce the idea that each shared buffer has 8 kb. So, which is the meaning of this integer? Thanks. S. ------ *From:* Shoaib Mir [mailto:[EMAIL PROTECTED] *Sent:* Monday, April 02, 2007 1:01 PM *To:* Sorin N. Ciolofan *Cc:* pgsql-general@postgresql.org; pgsql-admin@postgresql.org *Subject:* Re: [ADMIN] Increasing the shared memory I guess shared_buffers (in postgresql.conf file) will help you here if you have properly setup your kernel.SHMMAX value. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com ) On 4/2/07, *Sorin N. Ciolofan* <[EMAIL PROTECTED]> wrote: Hello! 
I'd like to ask you if there is any Postgre configuration parameter (like the ones defined in postgresql.conf file) that could be used for increasing the shared memory for Postgre? Thank you very much With best regards, Sorin
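To answer the unit question with the actual arithmetic: when no unit is given, the integer counts buffers of 8kB each, so shared_buffers = 1000 means 1000 x 8kB = 8000kB, roughly 8MB of shared memory for the buffer cache. Newer releases also accept an explicit unit, e.g. shared_buffers = 8MB.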
Re: [ADMIN] Windows 8.2.4 installation
It actually is: pg_ctl --help -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 6/11/07, Abraham, Danny <[EMAIL PROTECTED]> wrote: Hi, I am building a Windows installation by copying all the .exe and .dll files to a single \bin file. I am, then, testing by running "pg_ctl –help" and nothing comes out. The Path is set properly. Thanks Danny Abraham BMC Software CTM&D BU 972-52-4286-513 [EMAIL PROTECTED]
Re: [ADMIN] Logging autovacuum activity
You can also use the following query to check the autovacuum activity: select last_autovacuum, last_autoanalyze from "pg_catalog"."pg_stat_all_tables" -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 6/20/07, Ireneusz Pluta <[EMAIL PROTECTED]> wrote: Sibte Abbas napisał(a): > On 6/20/07, Ireneusz Pluta <[EMAIL PROTECTED]> wrote: >> Hello, >> >> How can I make sure that autovacuum is running, beyond believing that >> all autovacuum_ settings in >> postgresql.conf are set correctly? >> >> In the 8.1 I could see: >> LOG: autovacuum: processing database "test" >> in the log. >> >> But I can't find it in the 8.2.4. >> >> I could see some developers discussions about making changes to >> logging autovacuum activity, however >> could not track the discussion tight enough to find what the final >> solution is. >> >> Thanks >> >> Ireneusz Pluta >> >> > > ps -ef | grep autovacuum. > It does not show it. I run freebsd, does it matter? Anyway, I found that I may enable logging autovacuum messages by setting: log_min_messages = debug1 in postgresql.conf I found it from postgresql-8.2.4/src/backend/postmaster/autovacuum.c, where I read in line 420: ereport(DEBUG1, (errmsg("autovacuum: processing database \"%s\"", db->name))); and this is what the developers were discussing and I did not understand initially. Sorry for the noise. Ireneusz Pluta ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [ADMIN] replication between linxu postgresql and Windows postgresql
Slony can handle this all. If you do not want to use Slony, you can write your own triggers using dblink module to get the data across. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 6/25/07, Mary Anderson <[EMAIL PROTECTED]> wrote: Hi all, I am new to postgresql and have been called upon to set up a database to be used by two groups. One group (mine) is strictly a UNIX shop. The other group (theirs) is strictly a Windows shop. What is the best way to go. First, does postgresql have replication and can i get it with the open source database? Second, would replication work between a Linux postgresql and a Windows postgresql? What if I run postgresql on VMWare in the Windows shop? Will replication work? What about dumping the Linux database and restoring it on the Windows database. Will that work? Opinions on any of this are welcome! Mary Anderson ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [ADMIN] replication between linxu postgresql and Windows postgresql
I didn't mean any specific version; what I meant was with everything running on the latest releases. -- Shoaib Mir EnterpriseDB (www.enterprisedb.com) On 6/25/07, Chris Browne <[EMAIL PROTECTED]> wrote: [EMAIL PROTECTED] ("Shoaib Mir") writes: > Slony can handle this all. No it can't. It will not work with versions of PostgreSQL earlier than 7.3.3, because it requires namespace functionality that did not stabilize until that point. -- (reverse (concatenate 'string "moc.enworbbc" "@" "enworbbc")) http://www3.sympatico.ca/cbbrowne/lsf.html Rules of the Evil Overlord #147. "I will classify my lieutenants in three categories: untrusted, trusted, and completely trusted. Promotion to the third category can only be awarded posthumously." <http://www.eviloverlord.com/> ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Queriy results erratic on Linux
On Feb 8, 2008 3:35 AM, PostgreSql <[EMAIL PROTECTED]> wrote: > Hi > My application runs fine on Windows. > However when I copy the files to the Linux server some queries return no > results or errors even though the records and tables exist! > It is always the same records or tables that are not found! > In case of a table the error is: function.pg-query: Query failed: ERROR: > relation "sites" does not exist in > > Any idea what might cause the problem? > <http://archives.postgresql.org> > First of all, query your pg_class catalog table and see if those tables show up in there, like this: select relname from pg_class where relname = 'tablename'; If it's there, then check your search_path and see if you have your schema name in there. Other than that, I am not sure what you mean by copying the files from the Windows server to Linux, as I don't think you can use data files created by initdb on Windows on a Linux server. -- Shoaib Mir Fujitsu Australia Software Technology [EMAIL PROTECTED]
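For the search_path check, a quick sketch (the schema name is a placeholder):

SHOW search_path;
SET search_path TO myschema, public;

or make it permanent for the application's role with ALTER USER myuser SET search_path = myschema, public;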
Re: [ADMIN] Character encoding conversion
On Feb 9, 2008 7:28 PM, Mike Blackwell <[EMAIL PROTECTED]> wrote: > I have a database which was originally created with LATIN1 encoding. > I'd like to move it to UTF8. The data will load ok (COPY) but I am > getting 'invalid byte sequence for encoding..." messages when accessing > the data. > > Is there a way to automatically convert the offending characters, or to > easily locate them in a pg_dump file so they can be converted by hand? > > Try using 'iconv' for your dump file and convert it all to UTF8 first before restoring to a UTF8 database. -- Shoaib Mir Fujitsu Australia Software Technology [EMAIL PROTECTED]
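A minimal sketch of that workflow (database and file names are placeholders):

pg_dump -E LATIN1 olddb > olddb_latin1.sql
iconv -f LATIN1 -t UTF-8 olddb_latin1.sql > olddb_utf8.sql
psql -d newdb -f olddb_utf8.sql

where newdb was created with UTF8 encoding. If some of the data is really in another single-byte encoding (WIN1252 is a common culprit), change the -f argument to match.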
Re: [ADMIN] How to reindex when unable to open relation?
On Tue, Feb 26, 2008 at 5:13 PM, Darren Reed <[EMAIL PROTECTED]> wrote: > Inserts into my table started generating an error message (see below), > and there was a log message that suggested I reindex. > > postgres seems to start and stop ok, but no operations on the tables > seem to work. > > l# /usr/pkg/bin/pg_dump -s -U postgres > pg_dump: failed sanity check, parent table OID 16401 of pg_rewrite entry > OID 16403 not found > > Is this recoverable without using a backup? > > That looks to me like a corrupted system catalog! I guess the pg_rewrite record was not deleted when the parent table was dropped. So you can try something like: delete from pg_rewrite where oid = 16403; and see if that fixes the problem. -- Shoaib Mir Fujitsu Australia Software Technology [EMAIL PROTECTED]
Re: [ADMIN] Handling large volumes of data
On Tue, Apr 8, 2008 at 7:42 PM, Johann Spies <[EMAIL PROTECTED]> wrote: > Apparently the best approach is not to have very large tables. I am > thinking of making (as far as the firewall is concerned) a different > table for each day and then drop the older tables as necessary. > > Any advice on how to best handle this kind of setup will be > appreciated. > > Table partitioning is what you need --> http://www.postgresql.org/docs/current/static/ddl-partitioning.html and I would also advise distributing your tables across different disks through tablespaces. Tweak the shared_buffers and work_mem settings as well. -- Shoaib Mir Fujitsu Australia Software Technology [EMAIL PROTECTED]
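A rough sketch of day-based partitioning for a firewall log (table and column names are made up for the example):

CREATE TABLE fw_log (logtime timestamptz NOT NULL, src inet, dst inet, action text);

CREATE TABLE fw_log_20080408 (
    CHECK (logtime >= '2008-04-08' AND logtime < '2008-04-09')
) INHERITS (fw_log);

CREATE INDEX fw_log_20080408_logtime ON fw_log_20080408 (logtime);

New rows go into the current day's child table (via a trigger, a rule, or by inserting into it directly), constraint_exclusion = on lets queries against fw_log skip the irrelevant days, and expiring old data is just DROP TABLE fw_log_20080407;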
Re: [ADMIN] how to figure out how long a query takes in the pg log file...
On Wed, Mar 11, 2009 at 10:24 AM, Jessica Richard wrote: > I need to identify the slowest queries by analyzing the postgre log file. > > I would recommend using log_min_duration_statement in the postgresql.conf file for that purpose. Another good project I have used for this in the past is pgFouine --> http://pgfouine.projects.postgresql.org -- Shoaib Mir http://shoaibmir.wordpress.com/
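As a small example (the threshold is just an illustration), in postgresql.conf:

log_min_duration_statement = 1000   # log every statement that runs for 1000ms or more

then reload the server; the duration lines this produces are what pgFouine parses for its slowest-queries report (pgFouine also has its own log_line_prefix requirements, described in its docs).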
Re: [ADMIN] Max connections
On Tue, Aug 11, 2009 at 3:42 PM, Benjamin Krajmalnik wrote: > I am setting up a test environment to simulate a very high load. We have > a server farm which is receiving data (cold be thousands of simultaneous > users posting data). I currently have max_connections set to 500 and the > server is starting ok. If I try to increase the max_connections to 1000, > the server is unable to start. I am running a VM with 4GB RAM. Swap space > is not being used, and the system is showing about 1.5GB of ram not being > utilized. > > Are there any other config setting which should be changed together with > max_connections to successfully start the postmaster? > Increasing max_connections will make PostgreSQL request more System V shared memory and semaphores than your operating system's default configuration allows. Did you tune/tweak the kernel options for that? Details for those options can be found at http://www.postgresql.org/docs/8.3/static/kernel-resources.html#SYSVIPC -- Shoaib Mir http://shoaibmir.wordpress.com/
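As a sketch of the kind of kernel change involved (the values are illustrative, not a recommendation), on Linux that usually means raising the SysV limits, e.g.

sysctl -w kernel.shmmax=1073741824
sysctl -w kernel.sem="250 32000 100 128"

(persisted in /etc/sysctl.conf). With 1000 backends you need both a larger shared memory segment and enough semaphore sets; the exact formulas are on the kernel-resources page linked above.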
Re: [ADMIN] graceful reload
On Thu, Jun 3, 2010 at 5:48 PM, Vasiliy G Tolstov wrote: > Sorry for incorrect question. I mean: does postgresql can be reloaded > without breaking current connections? > For example: > I need to increate some values in postgresql conf file and want to this > valies applied on running server, can i do this without full restart? > > Which settings do you want to change? Some can be applied with just a database reload, but others (like some memory-related settings) cannot be changed without a database restart. -- Shoaib Mir http://shoaibmir.wordpress.com/
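For the settings that do support it, the reload itself is just (the data directory path is illustrative):

pg_ctl reload -D /var/lib/pgsql/data

or SELECT pg_reload_conf(); from psql. Existing connections are not broken; they simply pick up the new values.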
Re: [ADMIN] graceful reload
On Thu, Jun 3, 2010 at 6:01 PM, Vasiliy G Tolstov wrote: > > Memory related: > max_connections > shared_buffers > temp_buffers > max_prepared_transactions > work_mem > maintenance_work_mem > max_stack_depth > max_files_per_process > wal_buffers > > In config file some in of this settings exists comment line like > "changes require restart". This is mean exactly restart or reload can do > this? > > > For these you really need a database restart as reload will not help. -- Shoaib Mir http://shoaibmir.wordpress.com/
Re: [ADMIN] Reg: invalid byte sequence for encoding utf8 0x00
On Wed, Sep 29, 2010 at 7:18 PM, JSPrabu wrote: > Hi Support, > > > > Please help for the following issues which I am facing with migration and > replication process of oracle to EnterpriseDB database. > > > > 1) While migrating tables from oracle to edb database, I am facing > "invalid > byte sequence for encoding utf8" error. > > I guess you are posting this on the wrong list; this is not an EnterpriseDB mailing list but one for PostgreSQL-related questions, so I am not sure anyone here will be able to help you with Migration Studio. It will be easier to help if you can let us know the offending row that is causing the problem. -- Shoaib Mir http://shoaibmir.wordpress.com/
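On the 0x00 part of the subject, in case that is the offending byte: PostgreSQL text values cannot contain a NUL byte at all, so a common workaround (column, table and file names here are placeholders) is to strip it on the Oracle side during extraction,

SELECT REPLACE(some_col, CHR(0)) FROM some_table;

or, if the data passes through a flat file, tr -d '\000' < data.csv > data_clean.csv before loading.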
Re: [ADMIN] How to find the primary server of a hot standby server?
On Fri, Oct 15, 2010 at 9:39 AM, Guillaume Lelarge wrote: > On 14/10/2010 23:18, Rudolf van der Leeden wrote: > > [...] > > quick question to the PG 9 community. > > > > I can query a hot standby server if recovery is still active using > > the administration function pg_is_in_recovery(). Is there also a way > > to query for the 'primary server', i.e. which server is delivering > > the WAL records? > > > > I'd like to monitor a bunch of standby servers in terms of > > configuration and operation. > > > > There's no "SQL" way to know this, AFAICT. But I would love to be proven > wrong: I would be able to add this info to pgAdmin, which would be great. > > > What about doing a query like: show wal_level; If it shows hot_standby then that server should have been set up as a master. As an extra step, you can also check whether archive_mode is enabled. -- Shoaib Mir http://shoaibmir.wordpress.com/
Re: [ADMIN] I want to find out who is using a database
On Wed, Nov 10, 2010 at 1:17 PM, Tena Sakai wrote: > > Can somebody please tell me how I find who it is using the > x database? > > select * from pg_stat_activity where datname = ''; -- Shoaib Mir http://shoaibmir.wordpress.com/
Re: [ADMIN] Postgres Crash
On Fri, Dec 10, 2010 at 2:17 PM, Samuel Stearns wrote: > Howdy, > > > > Environment: > > > > Solaris 10 > > Postgres 8.3.12 > > > > Postgres crashed and left 26 postmaster processes active in it’s wake. Killed the children and re-started postgres successfully. Messages from the log: > > > > Dec 10 11:52:15 udrv postgres[771]: [ID 748848 local0.info] [6-1] > host=,user=,db= LOG: setsockopt(TCP_NODELAY) failed: Invalid argument > Did you try deleting the postmaster.pid file and then restarting? -- Shoaib Mir http://shoaibmir.wordpress.com/
Re: [ADMIN] Postgres Crash
On Fri, Dec 10, 2010 at 2:33 PM, Samuel Stearns wrote: > Thanks Tom and Shoaib, > > > > Shoaib, I did not delete postmaster.pid. I killed the children and > re-started successfully. > > > So is the database server all good and working fine now? -- Shoaib Mir http://shoaibmir.wordpress.com/