Can I force archiving of the WAL files using the CHECKPOINT statement? My
checkpoint_segments is set to 32, but archive_command is still being
called only every 45 minutes or so. Is there a way to speed this up a
bit? (I'd love to have it around 10 minutes, so I can have an almost-live
backed-up server
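For what it's worth, a CHECKPOINT by itself does not hand anything to archive_command; a WAL segment is only archived once it is complete. Assuming a server version that already has the archive_timeout setting (it appeared somewhere in the 8.x series, if memory serves), a line like this in postgresql.conf should cap the delay at roughly ten minutes:

  archive_timeout = 600    # seconds; force a segment switch, and hence archiving,
                           # even if the current 16 MB segment is not full

The trade-off is that a partially filled segment is still archived as a full 16 MB file, so very short timeouts inflate the archive.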
Of course I had already tried it - with this command:
SGML_CATALOG_FILES=/usr/local/share/sgml/docbook/4.2/docbook.cat
./configure --prefix=/usr/local/pgsql --enable-depend --enable-nls
--enable-integer-datetimes --with-openssl --with-pam
--enable-thread-safety --with-includes=/usr/local/include
--
Hello,
We did some more testing and managed to get the dump restored on 7.4.10. We then took a backup and tried to restore it onto 7.4.13, but it failed again, giving the same errors. In addition to this we took a dump of an existing DB on 7.4.13 and tried to restore it onto 7.4.13 itself. This also failed
[sorry if this was previously asked:
list searches seem to be down]
I'm using pg_dump to take a full backup of my database using a compressed format:

$ pg_dump -Fc my_db > /backup/my_db.dmp

It produces a 6 GB file whereas the pgdata uses only 5 GB of disk space:

$ ls -l /backup
Are you saying you want to create a unique constraint across the indexes of 2 or more columns rather than across the columns themselves?
-Aaron

On 6/20/06, ow <[EMAIL PROTECTED]> wrote:
Hi,
Is it somehow possible to create a UNIQUE constraint that relies on non-unique ONE-column indexes and uses index
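If the underlying goal is just uniqueness over the combination of the two columns, the usual approach is a multi-column unique constraint or unique index on the table itself, rather than anything built from the existing single-column indexes; a minimal sketch with made-up names:

  ALTER TABLE t ADD CONSTRAINT t_col_a_col_b_key UNIQUE (col_a, col_b);
  -- or, equivalent for enforcement purposes:
  CREATE UNIQUE INDEX t_col_a_col_b_idx ON t (col_a, col_b);

Both forms allow duplicates within each column individually, as long as the (col_a, col_b) pair is unique.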
I would dare guess, and it seems you suspect as well, that the binary data is why you are not getting very good compression. You may try dumping the tables individually with --table=table to see which tables are taking the most space in your dump. Once you find out which tables are taking the most
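Something along these lines will show which tables dominate the dump (the table names here are hypothetical):

  $ pg_dump -Fc --table=images my_db > /backup/images.dmp
  $ pg_dump -Fc --table=measurements my_db > /backup/measurements.dmp
  $ ls -lh /backup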
It might happen because of the type of data you have (binary images). Compression of binary files is notoriously poor, since there is only a small chance of the same characters recurring. In other words it is possible, since during compression there are additional characters added for checksums and
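You can see the same effect outside of pg_dump by compressing one of the image files directly (the file name below is hypothetical); for already-compressed formats such as JPEG the output is often no smaller, and sometimes slightly larger, than the input:

  $ ls -l scan_0001.jpg
  $ gzip -9 -c scan_0001.jpg | wc -c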
The DB with the large objects that I had trouble dumping two weeks ago is now
exhibiting some interesting fsm issues. The DB stores lots of large objects
used for medical research statistics and the data is generally input during
the day (9am-3pm pacific time) and evening (7pm-10pm pacific time)
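A reasonable first step for fsm trouble on servers of that vintage (a hedged sketch, not a prescription) is to run a database-wide VACUUM VERBOSE and compare the page-slot totals it prints at the end against the configured limits:

  VACUUM VERBOSE;
  -- if the number of page slots reported as needed exceeds max_fsm_pages,
  -- raise these in postgresql.conf and restart (values are placeholders):
  --   max_fsm_pages     = 200000
  --   max_fsm_relations = 2000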
In the HISTORY file coming with the source code there are several
modifications noted related to character-set handling in general, and to
Unicode more specifically, in the versions before 7.4.13 and in 7.4.13 itself. The
bottom line is that PostgreSQL in earlier versions did allow incorrect
UNICODE sequences
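If the failures are indeed caused by byte sequences that are not valid UTF-8, and the dump is a plain SQL dump (pg_dump without -Fc), one workaround that is often suggested, outside PostgreSQL itself, is to strip the invalid bytes before restoring; file names are hypothetical:

  $ iconv -f UTF-8 -t UTF-8 -c olddb.sql > olddb_clean.sql
  $ psql newdb -f olddb_clean.sql

The -c flag makes iconv silently drop invalid sequences, so diff the two files and review what was removed before trusting the result.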
"Nicola Mauri" <[EMAIL PROTECTED]> writes:
> I'm using pg_dump to take a full backup of my database using a compressed
> format:
> $ pg_dump -Fc my_db > /backup/my_db.dmp
> It produces a 6 GB file whereas the pgdata uses only 5 GB of disk space:
> ...
> Database contains about one-hundred
On Wed, 21 Jun 2006, Jeff Frost wrote:
The DB with the large objects that I had trouble dumping two weeks ago is now
exhibiting some interesting fsm issues. The DB stores lots of large objects
used for medical research statistics and the data is generally input during
the day (9am-3pm pacific
Mario Splivalo wrote:
> Can I force archiving of the WAL files using the CHECKPOINT statement? My
> checkpoint_segments is set to 32, but archive_command is still being
> called only every 45 minutes or so. Is there a way to speed this up a
> bit? (I'd love to have it around 10 minutes, so I can ha
Trying to connect from an ASP.NET front end to a PostgreSQL
8.1.4 backend.
ODBC is connecting, but I get this error on some pages:
ADODB.Field error '80020009'
Either BOF or EOF is True, or the current
record has been deleted. Requested operation requires a current record.
/index.asp
Hi,
I'm new to pgsql, so please be kind ;-)
OK, here is my problem:
I have a Linux box with Debian Woody and pgsql 7.2.
The problem is that pg_xlog holds about 137 files of 16 MB each; the
oldest file is from 2005.
I read the WAL documentation, but as far as I understand it, it should be auto
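As a rough sanity check (from memory of the docs for that era, so treat it as an assumption): pg_xlog is normally expected to hold at most about 2 * checkpoint_segments + 1 segments, so with the default checkpoint_segments = 3 that is 7 files of 16 MB each, roughly 112 MB. 137 files (around 2.1 GB), some dated 2005, is far beyond that, which suggests normal recycling is not happening; the server log around checkpoint time, and the checkpoint_segments / wal_files settings in postgresql.conf, are the places to look.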
Hi,
I am trying to cluster an index on the TPC-H lineitem table, which has
6 million records. The index is on a single column, l_suppkey. Postgres
(8.1.3) is not able to finish the operation even after 2 hours! The disk
is being accessed continuously and the CPU is at 2%. Postmaster is
started with the f
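One workaround that was commonly suggested for slow CLUSTER on tables of this size is to rewrite the table in sorted order yourself and rebuild the indexes afterwards; a minimal sketch assuming the standard TPC-H column names:

  CREATE TABLE lineitem_sorted AS
      SELECT * FROM lineitem ORDER BY l_suppkey;
  -- recreate the indexes and constraints on lineitem_sorted,
  -- then drop the old table and rename the new one into place.

Raising maintenance_work_mem for the index builds and work_mem for the big ORDER BY (both are 8.1 settings) usually makes a noticeable difference as well.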