Carlos Henrique Reimer writes:
> Extracted ulimit values from the postmaster pid and they look as expected:
> [root@2-NfseNet ~]# cat /proc/2992/limits
Limit                     Soft Limit     Hard Limit     Units
Max address space         102400         unlimited      bytes
So you've …
So if you watch processes with sort-by-memory turned on in top or htop,
can you see your machine running out of memory? Do you have enough swap
if needed? 48G is pretty small for a modern pgsql server with as much
data and as many tables as you have, so I'd assume you have plenty of
swap just in case …
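For reference, a minimal way to watch that while the initial copy runs (top's -o flag needs a reasonably recent procps; on older systems press M inside top to sort by resident memory):

free -m                                                  # overall RAM and swap usage
top -o %MEM                                              # live view sorted by resident memory
ps -o pid,vsz,rss,cmd -C postgres --sort=-rss | head     # fattest postgres processes first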
Extracted ulimit values from the postmaster pid and they look as expected:
[root@2-NfseNet ~]# ps -ef | grep /postgres
postgres  2992     1  1 Nov30 ?        03:17:46 /usr/local/pgsql/bin/postgres -D /database/dbcluster
root     26694  1319  0 18:19 pts/0    00:00:00 grep /postgres
[root@2-NfseNet ~]# cat /proc/2992/limits …
Carlos Henrique Reimer writes:
> Yes, all lines of /etc/security/limits.conf are commented out and session
> ulimit -a indicates the defaults are being used:
I would not trust "ulimit -a" executed in an interactive shell to be
representative of the environment in which daemons are launched ...
…
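For what it's worth, a quick way to see what the running daemons actually got is to read /proc directly; 2992 is the postmaster pid from the ps output quoted above, and the pgrep pattern assumes the processes show up under the name "postgres":

cat /proc/2992/limits                              # limits the postmaster inherited at startup
for p in $(pgrep -x postgres); do
    grep -H 'address space' /proc/$p/limits        # same check for every backend
done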
Yes, all lines of /etc/security/limits.conf are commented out and session
ulimit -a indicates the defaults are being used:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) …
On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer wrote:
> That was exactly what the process was doing and the out of memory error
> happened while one of the merges to set 1 was being executed.
You sure you don't have a ulimit getting in the way?
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.
On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera wrote:
>
> On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
>
>> needed to hold relcache entries for all 23000 tables …
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
> needed to hold relcache entries for all 23000 tables :-(. If so there
> may not be any easy way around it, except perhaps replicating subsets
> of the tables. Unless you can boost the memory available to the backend
>
I'd suggest this. Break the tables into several smaller replication sets,
subscribe them one batch at a time, and merge each batch into set 1 as it
finishes …
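A rough slonik sketch of that approach, with a made-up cluster name, connection info, table id and table name; a real script would loop over each batch of tables and make sure the subscription has caught up before merging into set 1:

slonik <<'EOF'
cluster name = replcluster;
node 1 admin conninfo = 'dbname=nfse host=oldserver user=slony';
node 2 admin conninfo = 'dbname=nfse host=newserver user=slony';

create set (id = 2, origin = 1, comment = 'next batch of tables');
set add table (set id = 2, origin = 1, id = 5001,
               fully qualified name = 'public.some_table');
subscribe set (id = 2, provider = 1, receiver = 2, forward = yes);
wait for event (origin = 1, confirmed = all, wait on = 1);
merge set (id = 1, add id = 2, origin = 1);
EOF

The point is that each subscription then only has to COPY (and build relcache entries for) the tables in that batch instead of all 23,000 at once.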
Slony version is 2.2.3
On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe wrote:
> Just wondering what slony version you're using?
>
--
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
Just wondering what slony version you're using?
Hi,
Yes, I agree, 8.3 has been out of support for a long time, and that is the
reason we are trying to migrate to 9.3 using SLONY to minimize downtime.
I eliminated the possibility of data corruption because the limit/offset
technique indicated different rows each time it was executed. Actually, the
failure …
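A minimal sketch of that limit/offset technique, with a made-up database and table name: walk the table in fixed-size slices and note which slice blows up. A genuinely corrupted row should make the same slice fail on every run, whereas here the failing rows moved around, which is what ruled corruption out:

for off in $(seq 0 100000 900000); do
    echo "checking offset $off"
    psql -d nfse -c "select * from public.some_big_table limit 100000 offset $off" > /dev/null || break
done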
Carlos Henrique Reimer writes:
> I'm facing an out of memory condition after running SLONY for several hours to
> get a 1TB database with about 23,000 tables replicated. The error occurs
> after about 50% of the tables have been replicated.
I'd try bringing this up with the Slony crew.
> I guess postgres …
Hi,
I'm facing an out of memory condition after running SLONY for several hours to
get a 1TB database with about 23,000 tables replicated. The error occurs
after about 50% of the tables have been replicated.
Most of the 48GB of memory is being used for file system cache, but for some
reason the initial copy …
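When a backend fails an allocation like this it normally writes a memory context dump to the server log just before the "out of memory" line, and that dump shows which context is eating the space (CacheMemoryContext is where relcache entries live). A quick way to pull it out, assuming the default pg_log directory under the data dir:

grep -B 60 -A 3 'out of memory' /database/dbcluster/pg_log/*.log | less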