Hi all, greetings!
Using Oracle Heterogeneous Services (Oracle HS) I have configured/created a DB
link in an Oracle 11g database that points to a PostgreSQL 9.3 database
(connecting with Postgres DB user credentials).
SQL> create public database link pg_link
       connect to postgres identified by blahblah
       using 'postgresql';
Since Postgres does not support public synonyms across databases in a cluster,
how do I connect to a specific database and query a specific table in this
Postgres cluster using the HS DB link?
Say I have two Postgres databases, pgdb01 and pgdb02, in the Postgres
cluster. Using this DB link that I have created in Oracle, how can I query a
specific table called table01 in the pgdb01 database?
Even though the table user_account exists in pgdb01 database, I cannot select
from it using the DB link.
SQL> select count(*) from "user_account"@pg_link;
select count(*) from "user_account"@pg_link
                     *
ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
ERROR: relation user_account does not exist at character 21;
No query has been executed with that handle {HY000,NativeErr = 1}
ORA-02063: preceding 3 lines from PG_LINK;
I tried the dbname.tablename syntax, but it didn't work. BTW, all my tables
belong to the public schema.
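For what it's worth, an HS DB link cannot switch databases at query time: the target database is fixed by the ODBC DSN that the gateway's init file names, so one link reaches exactly one Postgres database. A minimal sketch, assuming a gateway SID of pgdb01 (the file and DSN names here are assumptions, not from the thread):

```
# $ORACLE_HOME/hs/admin/initpgdb01.ora
HS_FDS_CONNECT_INFO = pgdb01_dsn    # name of the ODBC DSN below
HS_FDS_TRACE_LEVEL  = 0

# /etc/odbc.ini
[pgdb01_dsn]
Driver     = PostgreSQL
Servername = pghost
Port       = 5432
Database   = pgdb01                 # each DSN targets exactly one database
```

With a second init file and DSN for pgdb02 (plus matching listener/tnsnames entries) you would create one DB link per database. Also note that Oracle folds unquoted identifiers to upper case, so lowercase Postgres names must be double-quoted through the link, e.g. select count(*) from "user_account"@pg_link;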
Could anyone with DB link expertise answer my question?
Thanks,
Bhanu M. Gandikota
Mobile: (415) 420-7740
From: Alejandro Brust alejand...@pasteleros.org.ar
To: pgsql-admin@postgresql.org
Sent: Tuesday, October 1, 2013 12:30 PM
Subject: Re: [ADMIN] PostgreSQL 9.2 - pg_dump out of memory when backuping a
database with 300000000 large objects
Did you perform any vacuumdb / reindexdb before the pg_dump?
On 01/10/2013 09:49, Magnus Hagander wrote:
On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov kloch...@iqbuzz.ru wrote:
Hello All,
While trying to back up a database of relatively modest size (160 GB), I ran
into the following issue:
When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
the file /path/to/mydb.dmp does not appear (yes, I've checked permissions and
so on). pg_dump just consumes memory until it eats up all available RAM
(96 GB total on the server, 64 GB available) and is killed by the OOM
killer.
According to pg_stat_activity, pg_dump runs the following query
SELECT oid, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner)
AS rolname, lomacl FROM pg_largeobject_metadata
until it is killed.
strace shows that pg_dump is constantly reading a large amount of data from
a UNIX socket. I suspect that it is the result of the above query.
There are 300000000 large objects in the database. Please don't ask me why.
I tried googling this, and found mentions of pg_dump being killed by the OOM
killer, but I failed to find anything related to a huge number of large
objects.
Is there any method of working around this issue?
I think this problem comes from the fact that pg_dump treats each
large object as its own item. See getBlobs(), which allocates a
BlobInfo struct for each LO (and a DumpableObject if there are any,
but that's just one).
I assume the query (from that file):
SELECT oid, lomacl FROM pg_largeobject_metadata
returns 300000000 rows, which are then looped over?
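Back-of-envelope, that per-object bookkeeping is enough to explain the OOM. A rough sketch, assuming 3×10⁸ large objects and ~250 bytes of dumper-side state per object (both figures are assumptions, not measurements):

```shell
# Estimate pg_dump's in-memory metadata for N large objects.
# 250 bytes/object (PGresult row, BlobInfo struct, OID/ACL strings,
# allocator overhead) is an assumed average, not a measured figure.
num_los=300000000
bytes_per_lo=250
total_gib=$(( num_los * bytes_per_lo / 1024 / 1024 / 1024 ))
echo "approx ${total_gib} GiB of dumper-side metadata"
```

Under those assumptions the metadata alone exceeds the 64 GB the server had free, before a single byte of table data is written.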
I ran into a similar issue a few years ago with a client using a
32-bit version of pg_dump, and worked around it by moving to
64-bit. Unfortunately, I did not have time to look into the underlying
issue.
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin