Hi all, greetings!
Using Oracle Heterogeneous Services (Oracle HS), I have configured a DB link
from an Oracle 11gR3 database to a PostgreSQL 9.3 database (with PostgreSQL
DB user credentials).
SQL> create public database link pg_link
       connect to "postgres" identified by "blahblah"
       using 'pos
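For reference, the USING clause normally names a tnsnames.ora alias that
routes through the HS gateway. A sketch of how the finished statement tends
to look; pg_hs and "mytable" are hypothetical placeholders, not values from
the original message:

  SQL> create public database link pg_link
         connect to "postgres" identified by "blahblah"
         using 'pg_hs';
  -- PostgreSQL identifiers arrive case-sensitively, so quote them:
  SQL> select * from "mytable"@pg_link;

Behind the alias there must be an init<SID>.ora with HS_FDS_CONNECT_INFO
pointing at a working PostgreSQL ODBC DSN, plus a matching listener entry.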
Did you perform any vacuumdb / reindexdb before the pg_dump?
On 01/10/2013 09:49, Magnus Hagander wrote:
> On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov wrote:
>> Hello All,
>>
>> While trying to back up a database of relatively modest size (160 GB) I ran
>> into the following issue:
>>
>> When I run
>> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
Running Postgres 9.2.4 on CentOS 6.
We have a backup script that runs twice weekly. At the end of the script it
executes pg_archivecleanup to remove the old WAL files that are no longer
needed. Most of the time this runs as expected, but for some reason it does
not clean up all of the old WAL files.
I n
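For comparison, this is roughly the shape such a cleanup step takes; the
archive directory and the reliance on the newest .backup marker file are
assumptions, not details from the script above:

  # Keep everything newer than the latest base-backup marker, delete the rest.
  # /var/lib/pgsql/wal_archive is a hypothetical path.
  ARCHIVE=/var/lib/pgsql/wal_archive
  LAST_BACKUP=$(ls "$ARCHIVE"/*.backup 2>/dev/null | sort | tail -n 1)
  pg_archivecleanup -d "$ARCHIVE" "$(basename "$LAST_BACKUP")"

If the script derives its cutoff file differently, older segments can
legitimately survive a run.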
Would it be possible, in a future release, to prevent the system from
forgetting the stored password when pgAdmin fails to connect, or to add an
option that prevents passwords from being forgotten?
I use pgAdmin 1.16 or 1.18 to connect to our database and use the store
password option. If postgres is no
Viktor wrote:
> We are experiencing random database overloads caused by IDLE processes.
> Their count jumps from the normal ~70 connections to 250-300, with high I/O
> (30-40% wa, versus ~1% wa normally).
>
> The overloads don't last long, about 5-10 minutes, just a couple of times
> a month.
Hello,
We are experiencing random database overloads caused by IDLE processes.
Their count jumps from the normal ~70 connections to 250-300, with high I/O
(30-40% wa, versus ~1% wa normally).
The overloads don't last long, about 5-10 minutes, just a couple of times
a month.
Please suggest what might be causing this.
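When the spike hits, pg_stat_activity shows what those sessions are; a
minimal query, assuming 9.2 or later where the "state" column exists:

  -- Tally backends by state while the overload is in progress.
  SELECT state, count(*)
  FROM pg_stat_activity
  GROUP BY state
  ORDER BY count(*) DESC;

If most of them report "idle", the client_addr and application_name columns
in the same view point at whichever client or pooler opened them.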
> -----Original Message-----
> From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-
> ow...@postgresql.org] On Behalf Of Viktor
> Sent: Tuesday, October 01, 2013 9:19 AM
> To: pgsql-admin@postgresql.org
> Subject: [ADMIN] Random server overload
>
> Hello,
>
> We are experiencing random database overloads caused by IDLE processes.
On Tue, Oct 1, 2013 at 11:07 AM, Sergey Klochkov wrote:
> Hello All,
>
> While trying to back up a database of relatively modest size (160 GB) I ran
> into the following issue:
>
> When I run
> $ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
>
> File /path/to/mydb.dmp does not appear (yes, I've checked permissions
> and so on).
On Tue, Oct 1, 2013 at 4:01 AM, Giuseppe Broccolo
<giuseppe.brocc...@2ndquadrant.it> wrote:
> Maybe you can improve your database's performance by tuning some parameters:
>
>> max_connections = 500 # (change requires restart)
>
> Set it to 100, the highest value supported by PostgreSQL
No, it did not make any difference, and after looking through pg_dump.c
and pg_dump_sort.c, I cannot tell how it possibly could. See the stack
trace that I've sent to the list.
Thanks.
On 01.10.2013 15:01, Giuseppe Broccolo wrote:
Maybe you can improve your database's performance by tuning some parameters:
Maybe you can improve your database's performance by tuning some parameters:
PostgreSQL configuration:
listen_addresses = '*' # what IP address(es) to listen on;
port = 5432 # (change requires restart)
max_connections = 500 # (change requires restart)
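To confirm what a running server actually uses, the live values can be read
back over SQL; a quick check:

  -- Read live settings rather than trusting postgresql.conf on disk.
  SELECT name, setting, context
  FROM pg_settings
  WHERE name IN ('listen_addresses', 'port', 'max_connections');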
Stack trace:
Thread 1 (Thread 0x7ff72c4c97c0 (LWP 13086)):
#0 removeHeapElement (objs=0x1a0c90630, numObjs=<optimized out>,
preBoundaryId=<optimized out>, postBoundaryId=<optimized out>) at
pg_dump_sort.c:502
#1 TopoSort (objs=0x1a0c90630, numObjs=<optimized out>,
preBoundaryId=<optimized out>, postBoundaryId=<optimized out>) at
pg_dump_sort.c:415
#2 sortDumpableObjects (
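For anyone wanting to capture the same kind of trace from a live run, a
minimal sketch; the pgrep lookup assumes exactly one pg_dump process on the
machine:

  # Attach to the running pg_dump and print a one-shot backtrace.
  $ gdb --batch -ex "bt" -p "$(pgrep -n pg_dump)"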
I've upgraded to 9.2.4. The problem still persists. It consumed 10 GB of
RAM in 5 minutes and was still growing. The dump file did not appear.
On 01.10.2013 14:04, Jov wrote:
Try updating to the latest release; I see there is a bug fix for pg_dump
running out of memory in 9.2.2, in the release notes:
http://www.postgresql.org/docs/devel/static/release-9-2-2.html
Try updating to the latest release; I see there is a bug fix for pg_dump
running out of memory in 9.2.2, in the release notes
http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
-
Work around unportable behavior of malloc(0) and realloc(NULL, 0) (Tom
Lane)
On platforms where these calls return NULL, some code mistakenly thought
that meant out-of-memory.
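Before and after the update it is worth pinning down the exact versions in
play, since the client tools and the server can differ; for example (mydb as
in the original command):

  # Client tool version vs. server version.
  $ pg_dump --version
  $ psql -c "SELECT version();" mydb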
Hello All,
While trying to back up a database of relatively modest size (160 GB) I
ran into the following issue:
When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
File /path/to/mydb.dmp does not appear (yes, I've checked permissions
and so on). pg_dump just begins to consume memory until
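While it runs, the memory growth can be watched from another terminal; a
small sketch, assuming a single pg_dump process:

  # Refresh pg_dump's resident set size (RSS, in kB) every 5 seconds.
  $ watch -n 5 'ps -o pid,rss,etime,cmd -C pg_dump'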