[ADMIN] archive_log command...

2006-06-21 Thread Mario Splivalo
Can I force archiving of the WAL files using a CHECKPOINT statement? My checkpoint_segments is set to 32, but archive_command is still being called only every 45 minutes or so. Is there a way to speed this up a bit? (I'd love to have it around 10 minutes, so I can have an almost-live backed-up serve

Re: [ADMIN] [GENERAL] DocBook 4.2 detecting at configure time

2006-06-21 Thread Oleg Golovanov
Of course I had already tried that, with the command: SGML_CATALOG_FILES=/usr/local/share/sgml/docbook/4.2/docbook.cat ./configure --prefix=/usr/local/pgsql --enable-depend --enable-nls --enable-integer-datetimes --with-openssl --with-pam --enable-thread-safety --with-includes=/usr/local/include --

Re: [ADMIN] "UNICODE" error during restoration

2006-06-21 Thread Thusitha Kodikara
Hello, We did some more testing and managed to get the dump restored on 7.4.10. We then took a backup and tried to restore it onto 7.4.13, but it failed again, giving the same errors. In addition, we took a dump of an existing DB on 7.4.13 and tried to restore it onto 7.4.13 itself. This also failed

[ADMIN] Dump size bigger than pgdata size?

2006-06-21 Thread Nicola Mauri
[sorry if this was previously asked: list searches seem to be down] I'm using pg_dump to take a full backup of my database in compressed format: $ pg_dump -Fc my_db > /backup/my_db.dmp It produces a 6 GB file, whereas the pgdata directory uses only 5 GB of disk space: $ ls -l /backup

Re: [ADMIN] Unique constraint and index-combination feature in 8.1

2006-06-21 Thread Aaron Bono
Are you saying you want to create a unique constraint across the indexes of 2 or more columns rather than across the columns themselves? -Aaron On 6/20/06, ow <[EMAIL PROTECTED]> wrote: Hi, Is it somehow possible to create a UNIQUE constraint that relies on non-unique ONE-column indexes and uses index

Re: [ADMIN] Dump size bigger than pgdata size?

2006-06-21 Thread Aaron Bono
I would dare guess, and it seems you suspect as well, that the binary data is why you are not getting very good compression. You may try dumping the tables individually with --table=table to see which tables are taking the most space in your dump. Once you find out which tables are taking the most
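Aaron's per-table suggestion can be scripted; a minimal sketch, assuming the database is named my_db and the tables live in the public schema (both names are placeholders, and a running server is required):

```shell
# Dump each table separately, then list the dumps largest-first to see
# which ones dominate the total.  "my_db" is a placeholder database name.
for t in $(psql -At -d my_db -c \
    "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"); do
    pg_dump -Fc --table="$t" my_db > "/backup/${t}.dmp"
done
ls -lhS /backup/*.dmp
```

The -A and -t psql flags produce unaligned, tuples-only output suitable for a shell loop.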

Re: [ADMIN] Dump size bigger than pgdata size?

2006-06-21 Thread alex.cotarlan
It might happen because of the type of data you have (binary images). Compression of binary files is notoriously poor, since there is only a small chance of repeated characters. In other words, it is possible because during compression additional characters are added for checksums and

[ADMIN] strange fsm issues

2006-06-21 Thread Jeff Frost
The DB with the large objects that I had trouble dumping two weeks ago is now exhibiting some interesting fsm issues. The DB stores lots of large objects used for medical research statistics, and the data is generally input during the day (9am-3pm Pacific time) and evening (7pm-10pm Pacific time

Re: [ADMIN] "UNICODE" error during restoration

2006-06-21 Thread Ivo Rossacher
In the HISTORY file that comes with the source code, several changes are noted relating to character-set handling in general, and to Unicode more specifically, in the versions up to and including 7.4.13. The bottom line is that PostgreSQL in earlier versions allowed incorrect UNICODE sequen
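If the restore failures really are invalid UTF-8 byte sequences that the older server tolerated, one workaround often suggested on these lists is to strip them from a plain-text dump before loading it, using iconv's -c (discard unconvertible input) option. A self-contained sketch with a tiny sample "dump" standing in for the real file (all file names are placeholders):

```shell
# Create a sample "dump" containing one invalid byte (octal \377), then
# re-encode UTF-8 -> UTF-8, silently dropping invalid sequences with -c.
printf 'INSERT INTO t VALUES (1, '\''a\377b'\'');\n' > my_db.sql
iconv -f UTF-8 -t UTF-8 -c my_db.sql > my_db.clean.sql
cat my_db.clean.sql
# psql -d my_db -f my_db.clean.sql   # the actual restore step
```

Note that -c silently drops data; diff the original and cleaned files to see exactly what was removed before restoring.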

Re: [ADMIN] Dump size bigger than pgdata size?

2006-06-21 Thread Tom Lane
"Nicola Mauri" <[EMAIL PROTECTED]> writes: > I'm using pg_dump to take a full backup of my database using a compressed > format: > $ pg_dump -Fc my_db > /backup/my_db.dmp > It produces a 6 GB file whereas the pgdata uses only 5 GB of disk space: > ... > Database contains about one-hundred

Re: [ADMIN] strange fsm issues

2006-06-21 Thread Jeff Frost
On Wed, 21 Jun 2006, Jeff Frost wrote: The DB with the large objects that I had trouble dumping two weeks ago is now exhibiting some interesting fsm issues. The DB stores lots of large objects used for medical research statistics and the data is generally input during the day (9am-3pm pacific

Re: [ADMIN] archive_log command...

2006-06-21 Thread Bruce Momjian
Mario Splivalo wrote: > Can I force archiving of the WAL files using CHECKPOINT statement? My > checkpoint_segments is set to 32, but still archive_command is being > called only every 45 minutes or so. Is there a way to speed this up a > bit? (I'd love to have it around 10 minutes, so I can ha
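For reference, later PostgreSQL releases address exactly this with the archive_timeout setting, which forces a WAL segment switch (and hence an archive_command invocation) after a maximum interval. A postgresql.conf sketch, assuming a release that provides the setting (it was not available in 8.1, the current release at the time of this thread):

```
# postgresql.conf: force a log switch at least every 10 minutes, so
# archive_command runs at least that often even on a quiet system.
archive_timeout = 600        # seconds; value is illustrative
```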

[ADMIN] Win32 2003 Front end

2006-06-21 Thread Farrell,Bob
Trying to connect from an ASP.NET front end to a PostgreSQL 8.1.4 backend. ODBC connects, but I get this error on some pages: ADODB.Field error '80020009' Either BOF or EOF is True, or the current record has been deleted. Requested operation requires a current record. /index.asp

[ADMIN] 2,2gb of pg_xlog ??

2006-06-21 Thread Stefan . Schmidt
Hi, I'm new to pgsql, so please be kind ;-) OK, here is my problem: I have a Linux box with Debian Woody and pgsql 7.2. The problem is that pg_xlog holds about 137 files of 16 MB each, and the oldest file is from 2005. I read the WAL docs, but as far as I understand them, it should be auto
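For scale: under normal operation pg_xlog is documented to stay at no more than roughly 2 * checkpoint_segments + 1 segment files of 16 MB each, so 137 files strongly suggests segments are not being recycled. A quick back-of-the-envelope check (the checkpoint_segments value of 3 is the old default, a placeholder for your own setting):

```shell
# Rough upper bound on pg_xlog size: (2 * checkpoint_segments + 1)
# segments of 16 MB each.
checkpoint_segments=3        # old default; substitute your own value
files=$(( 2 * checkpoint_segments + 1 ))
echo "expected max: ${files} files, $(( files * 16 )) MB"
# prints: expected max: 7 files, 112 MB
```

With the default of 3, anything far beyond about 7 files (112 MB) that never shrinks points at a problem such as checkpoints not completing.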

[ADMIN] clustering takes too long!

2006-06-21 Thread Ravindra Guravannavar
Hi, I am trying to cluster an index on the TPC-H lineitem table having 6 million records. The index is on a single column l_suppkey.  Postgres (8.1.3) is not able to finish the operation even after 2 hours! The disk is being accessed continuously and CPU is at 2%. Postmaster is started with the f
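When CLUSTER's index-order scan is too slow on a table this size, one alternative often suggested on the lists is to rewrite the table in sorted order and recreate the index afterwards. A sketch run through psql ("tpch" is a placeholder database name; the table and column names come from the message; note that CREATE TABLE AS does not copy constraints, defaults, or other indexes, so those must be recreated too):

```shell
# Rewrite lineitem in l_suppkey order instead of running CLUSTER.
# Run during a maintenance window: this drops and recreates the table.
psql -d tpch <<'SQL'
BEGIN;
CREATE TABLE lineitem_sorted AS
    SELECT * FROM lineitem ORDER BY l_suppkey;
DROP TABLE lineitem;
ALTER TABLE lineitem_sorted RENAME TO lineitem;
CREATE INDEX lineitem_l_suppkey_idx ON lineitem (l_suppkey);
COMMIT;
SQL
```

Raising maintenance_work_mem (or the sort memory setting in older releases) for the session before the sort can also help substantially.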