On Aug 25, 2013, at 2:04 AM, Dzmitry wrote:
> Ok, thank you, it's a good point. I need to review and make fixes with what I
> have.
Yes, as suggested I would try just reducing first.
Where connection pooling could help is if your load across all those rails apps
is not spread out evenly, but varies.
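To illustrate the pooling idea, here is a minimal sketch of a pgbouncer setup in transaction-pooling mode; the database name, port, and pool sizes below are made up for illustration, not taken from this thread:

```shell
# Hypothetical pgbouncer.ini fragment. Transaction pooling multiplexes many
# client connections over a small pool of real server connections, so bursty
# load from several rails apps shares a few backends instead of holding 440.
cat > pgbouncer.ini <<'EOF'
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
EOF
```

Clients then connect to port 6432 instead of 5432; pgbouncer opens at most `default_pool_size` real connections per database/user pair.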
Ok, thank you, it's a good point. I need to review and make fixes with what I
have.
Thanks,
Dzmitry
On 8/24/13 6:14 PM, "Scott Ribe" wrote:
No, I am not using pgbouncer, I am using pgpool.
In total I have 440 connections to postgres (I have rails applications running
on several servers - each application sets up 60 connections to the DB and
keeps them forever, until killed); I also have some machines that do background
processing, which keep connections open as well.
On Aug 19, 2013, at 9:22 AM, Dzmitry wrote:
> I am using pgpool to balance load to slave servers.
So, to be clear, you're not using pgbouncer after all?
--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
Sorry for the confusion.
I have 2 machines; each machine runs 1 process with 20 threads - 40 threads in
total. Every thread connects to postgres and inserts data. I am using
technology that keeps connections open: every thread opens 1 connection when
it starts, and will close it only when it is killed.
On Aug 19, 2013, at 9:07 AM, Dzmitry wrote:
> Yes- I need so many threads...
This is not at all clear from your answer. At most, 1 thread per logical core
can execute at any one time. And while many can be waiting on IO instead of
executing, there is still a rather low limit to how many threads it is useful
to run.
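Scott's point about cores can be checked directly on the box; a quick sketch (assumes Linux with coreutils and procfs):

```shell
# The number of threads that can actually execute simultaneously equals the
# number of logical cores the OS sees (8 on the server described here).
nproc                                 # logical cores
grep -c '^processor' /proc/cpuinfo    # same figure, read from procfs
```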
Yes - I need so many threads; every night I need to load jobs from xml into
the DB. I need to do it as fast as I can; currently it takes 4h to load all of
them (around 100 jobs).
CPU IO wait is about 25%. Do you know how I can check other params related to
IO wait?
Thanks,
Dzmitry
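On checking IO wait: one way is to read the aggregate `cpu` line of /proc/stat, where the fifth value after the label is time spent in iowait. The sketch below runs the computation against a sample line with made-up values (on a live box you would read the real file, or use `iostat -x` from the sysstat package for per-device detail):

```shell
# Compute the iowait share of total CPU time from a /proc/stat "cpu" line.
# Sample values below are illustrative; live equivalents:
#   head -1 /proc/stat    # fields after "cpu": user nice system idle iowait irq ...
#   iostat -x 5           # per-device utilisation (sysstat package)
cpu_line="cpu  1000 50 300 6150 2500 0 0 0 0 0"
iowait_pct=$(echo "$cpu_line" | awk '{t=0; for (i=2; i<=NF; i++) t+=$i; printf "%d", $6*100/t}')
echo "iowait: ${iowait_pct}%"   # prints "iowait: 25%" for the sample values
```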
I am already using pgpool; is it bad to have 550 connections - is that too
many?
Thank you guys for all your help.
Thanks,
Dzmitry
On 8/19/13 4:43 PM, "Scott Ribe" wrote:
On 19.08.2013 10:07, Dzmitry wrote:
> I am doing a lot of writes to the DB in 40 different threads - every thread
> checks if a record exists - if not => insert the record, if it exists =>
> update the record.
> During this update, my disk IO is almost always at 100% and sometimes it
> crashes my DB with the following message:
On Aug 19, 2013, at 7:23 AM, Stéphane Schildknecht
wrote:
> As Laurenz said, you should have a look at documentation.
>
> It explains how you can lower the risk of OOMKiller killing your PostgreSQL
> processes.
> 1. You can set vm.overcommit_memory to 2 in sysctl.conf
> 2. You can adjust the value
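As a sketch of step 1 (the `overcommit_ratio` value here is illustrative, not a recommendation from the thread; try it on a non-production box first):

```shell
# Persist in /etc/sysctl.conf (illustrative values):
#   vm.overcommit_memory = 2    # strict accounting: allocations fail instead of OOM kill
#   vm.overcommit_ratio = 80    # commit limit = swap + 80% of RAM
# Apply immediately without a reboot (needs root):
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=80
```

With `overcommit_memory = 2` a backend that asks for too much memory gets a clean "out of memory" error from malloc rather than being killed mid-flight by the kernel.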
On 19/08/2013 15:06, Dzmitry wrote:
Thank you guys! Found it in the logs (db-slave1 is my replica that uses
streaming replication):
Aug 18 15:49:38 db-slave1 kernel: [25094456.525703] postgres invoked
oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
Aug 18 15:49:38 db-slave1 kernel: [25094456.525708] postgres cpuset=/
mems
Hi !
Since Maverick, Ubuntu developers have disabled logging to /var/log/messages
by default.
You should check /var/log/syslog instead.
--
Mael
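To illustrate, the grep pattern below is run against the kernel line Dzmitry posted elsewhere in the thread; on a live Ubuntu box you would point the same pattern at /var/log/syslog (the commented lines show the live equivalents):

```shell
# Filter OOM-killer evidence out of the kernel log.
sample='Aug 18 15:49:38 db-slave1 kernel: [25094456.525703] postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0'
echo "$sample" | grep -c 'invoked oom-killer'   # prints 1 for the sample line
# live:  grep -i 'oom-killer' /var/log/syslog
# live:  dmesg | grep -i oom
```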
Do you mean the postgres log file (in postgresql.conf)
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_min_messages = warning
Or /var/log/messages ? Because I don't have that file :(
Thanks,
Dzmitry
On 8/19/13 12:26 PM, "Albe Laurenz" wrote:
I don't think that's the case. I am using newrelic to monitor my DB
servers (I have one master and 2 slaves - all use the same configuration) -
memory does not go above 12.5GB, so I have a good reserve, and I don't
see any swapping there :(
Thanks,
Dzmitry
Hey folks,
I have a postgres server running on ubuntu 12, Intel Xeon, 8 CPUs, 29 GB RAM,
with the following settings:
max_connections = 550
shared_buffers = 12GB
temp_buffers = 8MB
max_prepared_transactions = 0
work_mem = 50MB
maintenance_work_mem = 1GB
fsync = on
wal_buffers = 16MB
commit_delay = 50
c
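A back-of-envelope check of the settings above (this is only a rough lower bound: each backend can use work_mem several times within one query, and maintenance_work_mem, temp_buffers, and the OS cache come on top):

```shell
# Rough worst-case memory demand for the posted settings, in MB.
shared_buffers=$((12 * 1024))    # shared_buffers = 12GB
work_mem=50                      # work_mem = 50MB
max_connections=550
ram=$((29 * 1024))               # 29GB box
need=$((shared_buffers + max_connections * work_mem))
echo "need ~${need} MB of ${ram} MB RAM"   # 39788 MB vs 29696 MB
```

Even this lower bound exceeds the machine's RAM, which is consistent with the OOM-killer activity reported in the thread.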
Hi
Postgres crashes with -
PG "FATAL: could not reattach to shared memory (key=5432001, addr=0210):
Invalid argument.
The version is 8.2.4, the platform is win32
Does anyone know the reason or a workaround?
Thanks,
Yuval Sofer
BMC Software
CTM&D Business Unit
DBA Team
972-52-4286-282
yuv
Hi All
After running for a couple of days, my postgres server crashes.
When I do a /etc/init.d/postgresql status, I get "unused".
And when I then try to start it, it fails. Only a
reboot helps.
Here's my /var/log/postgres:
DEBUG: database system is ready
ERROR: Relation "cleared" does not exist
"Barry" <[EMAIL PROTECTED]> writes:
> After running for a couple of days, my postgres server crashes.
Sounds to me like you are starting the postmaster under some resource
limit that it eventually exceeds. Check ulimit settings.
regards, tom lane
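A sketch of how to follow Tom's suggestion: run these in the same shell or init-script context that launches the postmaster, since limits are per-process and inherited (exact values differ per user and system):

```shell
# Inspect the resource limits the postmaster would inherit at startup.
ulimit -a    # all limits at once
ulimit -n    # max open file descriptors
ulimit -u    # max user processes (bash/ksh; not in every POSIX sh)
```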
Hmm, all the crashes you show us below sound like resource exhaustion
issues. That's a nice way to say you're feeding silly SQL to the server,
and it's _trying_ to do what you ask... How are you driving the server?
psql? some sort of CGI? We need more info to help you debug what's
happening.
In par
I need some advice on how to get Postgres to stop crashing. It crashes
consistently under high server load. I don't want to sound bitchy, but
that's just what has been happening lately.
Now I'll give you the specs of the system:
Red Hat Linux 5.2 (kernel 2.0.36)
400MHz Intel machine with 384 MB RAM
L