Hello,
I am about to upgrade from 7.3.4 to 8.0.3, and I read that using a DB
replication tool is a good way to go about it.
I see there are multiple replicator choices:
Slony-I
Daffodil Replicator
Mammoth Replicator
I was wondering if anyone has tried any 2 or 3 of them, and could share
their experiences.
Hello,
Thanks for the explanation :) I thought there was some technical
limitation when dump/reload is used, and I just wasn't seeing it after
my dump/reload successfully got me from 7.3.4 to 8.0.3.
Still, DB replication sounds very useful, so I'd still be interested in
Slony-I, Daffodil, and Mammoth.
Hello,
I followed this advice for picking a good effective_cache_size value
(below) from Scott Marlowe, and ran into a bit of trouble:
I looked at the `top' output and saw "721380k cached".
So I calculated the effective cache size using Scott's formula:
721380/8 = 90172
Then I changed my effective_cache_size to 90172.
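(For reference, the resulting postgresql.conf line, assuming the default 8 kB block size, would be roughly:)

# 721380 kB cached in `top' / 8 kB per page ~= 90172 pages
effective_cache_size = 90172

A reload should be enough to pick it up, since this is only a planner setting:
pg_ctl -D /var/lib/pgsql/data reload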
Hello,
I have a dump (non-binary, if it matters) of a DB that has some characters in
it that my DB doesn't want to take.
I'm using PG 8.0.3 and it was created with Unicode support:
=> \encoding
UNICODE
Characters that cause problems during the import are things like:
é and other accented characters from Latin-1.
Thanks John and Ivo for help.
It turned out that I had to manually SET CLIENT_ENCODING TO 'LATIN1' before
processing the dump (which didn't have this specified). This fixed the problem.
I thought a DB set to UNICODE char encoding (server_encoding) would process the
Extended ASCII characters, but apparently the client_encoding has to match the
encoding of the dump itself.
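(A minimal sketch of the workaround, assuming a plain-text dump file named dbdump as in my other posts; either form should work:)

$ psql mydb
mydb=> SET CLIENT_ENCODING TO 'LATIN1';
mydb=> \i dbdump

or, from the shell, let libpq pick the encoding up from the environment:

$ PGCLIENTENCODING=LATIN1 psql -f dbdump mydb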
Hi,
I just renamed some of my tables, but I now have sequences with older names.
I followed info from
http://www.postgresql.org/docs/current/static/sql-altersequence.html :
"Some variants of ALTER TABLE can be used withsequences as well; for
example, to rename a sequence use ALTERT
Thanks Andy, this was it:
ALTER TABLE watchlist ALTER COLUMN id SET DEFAULT
nextval('public.watchlist_id_seq'::text);
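(For the archives, a sketch of the whole cleanup in one place: rename the leftover sequence first, then point the column default at the new name. The new sequence name below is only an example, not something from my real schema:)

ALTER TABLE watchlist_id_seq RENAME TO watchlist2_id_seq;
ALTER TABLE watchlist ALTER COLUMN id SET DEFAULT
nextval('public.watchlist2_id_seq'::text);

The second statement is needed because the default refers to the sequence by its text name, so it does not follow the rename on its own.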
Otis
- Original Message
From: Andy Shellam <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]; pgsql-admin@postgresql.org
Cc: pgsql-admin@postgresql.org
Sent: Saturday, March 25
Hi,
I'm running PG 8.0.3. I'm trying to catch slow queries, so I have this in
postgresql.conf:
# log queries that take more than 500 ms
log_min_duration_statement = 500 # in ms
This does log _some_ queries that take > 500 ms to run.
However, it looks like not all queries get logged!
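(In case it helps with reproducing this: a quick sanity check that the setting really is active in the running server, sketched below.)

$ pg_ctl -D /var/lib/pgsql/data reload
$ psql mydb -c "SHOW log_min_duration_statement;"

If SHOW reports 500 and slow statements still don't show up in the log, the configuration itself isn't the problem.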
Hi Chris,
I'm pretty sure (I don't do any timing on the Hibernate/Java/app side). I changed the
config to:
# log queries that take more than 1 ms
log_min_duration_statement = 1 # in ms
Still nothing in the log :) (I did a `pg_ctl reload' on the postmaster)
Simon Riggs confirmed this is a known bug in a
Interesting, Hubert.
Where/how do you get the data for:
- number of transactions per second
- duration of longest query
?
Thanks,
Otis
- Original Message
From: hubert depesz lubaczewski
To: Marc G. Fournier
Cc: pgsql-admin@postgresql.org
Sent: Sunday, June 11, 2006 11:10:2
Hello,
Occasionally I see that my (web) app leaves some DB connections open,
so they look like this to `ps':
postgres: username dbname 127.0.0.1 idle in transaction
This results in my DB connection pool getting exhausted every so often.
I need to track the source of this problem.
Is there a way to see what these connections are doing?
Hello,
I think pg_stat_activity table may show me what I need.
However, even though I have 'stats_command_string = true' property in
postgresql.conf (and I restarted postmaster), I do not see the
'current_query' in pg_stat_activity table:
simpydb=> select * from pg_stat_activity ;
datid | datna
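(For completeness, this is the kind of query I was hoping to run once current_query shows up; in 8.0 the stuck backends should report current_query as '<IDLE> in transaction':)

SELECT procpid, usename, datname, query_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';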
Hello,
I'm having trouble with pg_dump and pg_restore (PG 7.3.4).
This is how I'm dumping my DB:
# pg_dump -d mydb --clean --inserts --column-inserts --format=P -v -h
localhost -p 5432 -U otis > dbdump
# gzip -9 dbdump + scp it to a remote machine where I want to restore
this dump in a different database.
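(For context, the whole transfer can be done as one sketch like the following, with remotehost being a placeholder; note that a plain-format (--format=p) dump is restored with psql, not pg_restore, which only understands the custom and tar archive formats:)

$ pg_dump --clean --inserts --column-inserts --format=p -v -h localhost -p 5432 -U otis mydb | gzip -9 > dbdump.gz
$ scp dbdump.gz otis@remotehost:
# then, on remotehost:
$ gunzip -c dbdump.gz | psql -U otis mydb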
Hello,
I'm using PG 7.3.4 under RedHat 9.0 and I've started noticing the
following type of errors:
cannot open segment 1 of relation url (target block 537329664): No
such file or directory
How can I determine what exactly is corrupt or missing and which row is
triggering this?
I have found pg
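(One starting point, sketched from the catalogs: find the file that backs the relation and check which segment files actually exist on disk. The paths assume the /var/lib/pgsql/data location used elsewhere in this thread, and <dboid>/<relfilenode> are placeholders; segment 1 would be the file named <relfilenode>.1. For what it's worth, block 537329664 at 8 kB per block would be roughly 4 TB into the table, which by itself suggests a corrupted block pointer rather than a genuinely missing 4 TB file.)

mydb=> SELECT oid FROM pg_database WHERE datname = 'mydb';
mydb=> SELECT relfilenode, relpages FROM pg_class WHERE relname = 'url';
$ ls -l /var/lib/pgsql/data/base/<dboid>/<relfilenode>*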
Hello,
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> writes:
> > I'm using PG 7.3.4 under RedHat 9.0 and I've started noticing the
> > following type of errors:
> > cannot open segment 1 of relation url (target block 537329664): No
> > such file or directory
>
> If this is co
Hello,
I was using 7.3.4 and decided to upgrade to 7.3.8. I did this via RPMs
for RH9. RPM installed OK, and the migration of data looks like it
worked.
I have one remaining problem: I can start the DB with `pg_ctl -D
/var/lib/pgsql/data start', but I cannot start it with
`/etc/rc.d/init.d/postgresql start'.
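(Two quick checks that should show whether the init script and the installed server agree on the version; both are plain RPM/shell commands:)

$ rpm -q postgresql-server
$ grep PGVERSION /etc/rc.d/init.d/postgresql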
Hello,
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> writes:
> > Although I upgraded to 7.3.8, I saw several incorrect(?) references to
> > 7.4 (e.g. PGVERSION=7.4 in that init script). Mistakes?
>
> Yes ... yours. You may have thought you updated to 7.3.8, but you
> evidentl
Aha! :)
No problem Devrim - I did this last night and everything worked. The
init script must be from the 7.4.* versions, because there are also
changelog-type messages mentioning 7.4.* in there.
Otis
--- Devrim GUNDUZ <[EMAIL PROTECTED]> wrote:
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > Iain wrote:
> >> Is there a "standard" directory that people tend to use for this,
> >> such as /var/local/pgsql/ ?
>
> > According to the Filesystem Hierarchy Standard, program data should be
> > under
Hello,
I have a DB with about 30 tables, where 2 tables are significantly
larger than the rest, and contain a bit over 100,000 rows.
Every night I do these 3 things:
VACUUM;
ANALYZE;
pg_dump
I am noticing that the VACUUM part takes nearly 30 minutes, during
which the DB is not very accessible (a
Hello,
I still have this issue of a long vacuum process on a DB that is
not really all that big in schema (~ 30 tables) nor size (biggest table
is ~150K rows, another ~120K, all others only a few thousand rows
each).
VACUUMing this DB takes about 30 minutes, and during that time the DB
is pretty unresponsive.
Hello,
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> <[EMAIL PROTECTED]> writes:
> > VACUUMing this DB takes about 30 minutes, and during that time the DB
> > is pretty unresponsive, although the PG process is not using a lot of
> > CPU (load ~ 1) nor memory (~20MB for the VACUUM process).
>
> H
Hello,
So I run VACUUM VERBOSE (just like that, without specifying a table)
and got some output. The 2 big tables take 99% of the vacuuming time.
Now, I run VACUUM religiously every night, across all tables, but maybe
that's overkill for the number of updates and inserts in this DB.
Maybe s
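(A sketch of what a lighter nightly routine could look like, if only the two big tables really need frequent attention; the table names are placeholders, not my real schema:)

-- nightly, just the two heavily updated tables
VACUUM ANALYZE big_table_1;
VACUUM ANALYZE big_table_2;
-- weekly, the whole database
VACUUM ANALYZE;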
Hello,
I have a fairly large DB to dump and restore as fast as possible. I'm moving
from 8.0.3 to 8.2.3! :)
I normally dump with these options:
-d MyDB --clean --inserts --column-inserts --format=P
But the last time I tried that, the restore took forever. So I'm
looking for the fastest way to dump and restore.
- Original Message
From: Tom Lane <[EMAIL PROTECTED]>
[EMAIL PROTECTED] writes:
> I normally dump with these options:
> -d MyDB --clean --inserts --column-inserts --format=P
> But the last time I tried that, the restore took forever.
--inserts is pretty expensive.
OG: rig
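(To spell out what that suggests, a sketch of the COPY-based route with the custom format so pg_restore can be used on the 8.2.3 side; database and file names are the same placeholders as above:)

$ pg_dump --format=c -v -U otis MyDB > MyDB.dump
$ pg_restore --clean -v -U otis -d MyDB MyDB.dump

Without --inserts the data goes over as COPY, which is much faster to restore; dumping the 8.0.3 database with the newer 8.2.3 pg_dump, where convenient, is also the usual recommendation for version upgrades.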
Hi,
Yes, I remember discussions about (f)sync config. Can anyone comment on
whether turning fsync off for a restore into 8.2.3:
1) is advisable
2) will make the restore faster
If the OS and FS matter, this is on a Fedora Core 3 Linux with kernel 2.6.9 and
the ext3 journaling FS.
Thanks,
Otis
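(If we do try it, the toggle itself is only a one-line change plus a reload; the important caveat is to turn fsync back on, and restart cleanly, before the database takes real traffic, because a crash while fsync is off can corrupt the cluster. A sketch:)

# in postgresql.conf, before the restore:
fsync = off
# pg_ctl reload, run the restore, then put it back:
fsync = on
# and reload again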
Hi,
I've got PG 8.2.3 running on a dual-core P4, and I noticed that, at least
during reindexing, the PostgreSQL REINDEX process is using only 1 of those 2 cores.
You can clearly see this from this snippet from top:
Cpu0 : 93.3% us, 6.7% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si
Cpu1
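(One workaround sketch, if the goal is just to keep both cores busy: reindex different tables from two separate sessions; the table names are placeholders:)

$ psql mydb -c "REINDEX TABLE big_table_1;" &
$ psql mydb -c "REINDEX TABLE big_table_2;" &
$ wait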
Hi,
A perfectly normal and healthy PG 8.2.3 instance of mine has started acting funny:
a query that normally takes only a few milliseconds sometimes runs for minutes.
I thought maybe something's funny with indices, so I reindexed them all.
Didn't help. Thought maybe something's funky with the who
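(A sketch of the kind of comparison that usually comes next: run the problem statement under EXPLAIN ANALYZE during a fast period and again during a slow one, and compare the plans; the query below is only a stand-in for the real one:)

EXPLAIN ANALYZE SELECT * FROM watchlist WHERE id = 12345;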