Re: [GENERAL] clarification about standby promotion

2017-02-08 Thread Benoit Lobréau
Hi,

Thank you for the confirmation!

For the second part, I understand your explanation, but I fail to see how
checking what we have replayed against what we have received confirms that
we have received everything (unless we are using synchronous replication).

Have a good day!

Benoit.


Re: [GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Bill Moran
On Wed, 08 Feb 2017 10:44:24 -0500
Tom Lane  wrote:

> Albe Laurenz  writes:
> > Bill Moran wrote:
> >> What I feel is the best way to mitigate the situation is to have some
> >> setting that limits the maximum RAM any backend can consume.
> 
> > I'd delegate that problem to the operating system which, after all,
> > should know best of all how much memory a process uses.
> 
> I've had some success using ulimit in the past, although it does have
> the disadvantage that you have to impose the same limit on every PG
> process.  (You set it before starting the postmaster and it inherits
> to every child process.)  If memory serves, limiting with the -v switch
> works better than -d or -m on Linux; but I might be misremembering.
> Conceivably we could add code to let the ulimit be set per-process,
> if the use-case were strong enough.

Thanks, Tom. I'm not sure why I didn't think to use this, although part of
the problem may be that most of the links that come up in a Google search
on this topic don't mention this suggestion.

Hopefully having this in the list archives will make the search easier
for the next person who has this issue. Does anyone know if there are
any suggestions to this effect in the official documentation? If not,
I'll try to make some time to submit a patch.

> To implement a limit inside PG, we'd have to add expensive bookkeeping
> to the palloc/pfree mechanism, and even that would be no panacea because
> it would fail to account for memory allocated directly from malloc.
> Hence, you could be pretty certain that it would be wildly inaccurate
> for sessions using third-party code such as PostGIS or Python.  An
> OS-enforced limit definitely sounds better from here.

Unfortunate but understandable.

-- 
Bill Moran 




Re: [GENERAL] PostgreSQL on eMMC - Corrupt file system

2017-02-08 Thread Thomas Güttler
On 08.02.2017 07:25, Thomas Güttler wrote:
> Hi PostgreSQL experts,
> 
> ...


# Update

After following the hints from [this answer][1], I could sync via owncloud for
hours and no file system errors occurred. This is no big surprise, since only
very few I/O operations now happen on the eMMC. Here is what I did (a sketch
of the setup follows the list):

 - attach an external traditional hard disk
 - put the postgres data directory and /var/log on the external disk
 - disable swap
 - use ramfs for /tmp
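
For illustration, the resulting /etc/fstab could look roughly like this
(a sketch only; the device name and paths are assumptions, not copied from
my actual machine):

/dev/sda1        /mnt/hdd             ext4   defaults  0  2   # external hard disk
ramfs            /tmp                 ramfs  defaults  0  0   # /tmp in RAM
/mnt/hdd/pgdata  /var/lib/postgresql  none   bind      0  0   # postgres data on the disk
/mnt/hdd/varlog  /var/log             none   bind      0  0   # logs on the disk

Swap is disabled by commenting out the swap entry and running "swapoff -a".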


But the question above still remains:

 Is running Linux with Postgres on eMMC a bad idea in general?



  [1]: http://raspberrypi.stackexchange.com/questions/169/how-can-i-extend-the-life-of-my-sd-card/186#186


BTW, I asked the same question here: 
http://askubuntu.com/questions/880947/linux-on-emmc-corrupt-file-system



-- 
http://www.thomas-guettler.de/




Re: [GENERAL] clarification about standby promotion

2017-02-08 Thread Venkata B Nagothi
On Thu, Feb 9, 2017 at 4:53 AM, Benoit Lobréau wrote:

> Hi,
>
>
> I would like to clarify something about standby promotion. From the
> sentence below, I understand that, during the promotion process, postgres
> will replay all the available WAL (from the archive or pg_xlog).
>

Yes, that is correct.


> https://www.postgresql.org/docs/9.5/static/warm-standby.html#STREAMING-REPLICATION
>
> 25.2.2. Standby Server Operation
> ...
> Standby mode is exited and the server switches to normal operation when
> pg_ctl promote is run or a trigger file is found (trigger_file). Before
> failover, any WAL immediately available in the archive or in pg_xlog will
> be restored, but no attempt is made to connect to the master.
>
> I have seen several articles like this one
> (https://www.enterprisedb.com/switchoverswitchback-postgresql-93) where they
> say that pg_last_xlog_receive_location() and pg_last_xlog_replay_location()
> should be checked before promotion. I don't understand why, since they will
> be replayed anyway. Did something change about this since 9.3?
>

The above link is about improvements to the switchover/switchback process
as of version 9.3; what you are asking about is the standby promotion
process. When the standby is promoted, as mentioned in the docs, it applies
whatever WAL is available during the promotion process and makes no attempt
to connect to the master. This means you would not know whether any pending
WAL is still to be streamed from the master; in other words, the standby may
not know that the master is ahead. That is why it is important to know the
standby's current position by executing the above-mentioned *xlog* functions.
Hope I answered your question.
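
For illustration, a minimal check with libpq could look like this (a sketch
only, not from the article; the connection is assumed to point at the
standby, and error handling is kept minimal):

#include <string.h>
#include <libpq-fe.h>

/* Returns 1 if the standby has replayed everything it has received so far,
 * 0 otherwise.  Note that this alone still cannot prove that the master
 * has no further WAL to send. */
int standby_caught_up(PGconn *standby)
{
    PGresult *res = PQexec(standby,
        "SELECT pg_last_xlog_receive_location() = pg_last_xlog_replay_location()");
    int caught_up = 0;

    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        caught_up = (strcmp(PQgetvalue(res, 0, 0), "t") == 0);
    PQclear(res);
    return caught_up;
}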

Regards,

Venkata B N
Database Consultant


[GENERAL] LDAP configuration

2017-02-08 Thread PAWAN SHARMA
Hi All,


I want to use the pgsync method for LDAP configuration in our current
environment.

Could you please share a document or link describing the configuration?


-Pawan


[GENERAL] clarification about standby promotion

2017-02-08 Thread Benoit Lobréau
Hi,


I would like to clarify something about standby promotion. From the
sentence below, I understand that, during the promotion process, postgres
will replay all the available WAL (from the archive or pg_xlog).


https://www.postgresql.org/docs/9.5/static/warm-standby.html#STREAMING-REPLICATION

25.2.2. Standby Server Operation
...
Standby mode is exited and the server switches to normal operation when
pg_ctl promote is run or a trigger file is found (trigger_file). Before
failover, any WAL immediately available in the archive or in pg_xlog will
be restored, but no attempt is made to connect to the master.

I have seen several articles like this one
(https://www.enterprisedb.com/switchoverswitchback-postgresql-93) where they
say that pg_last_xlog_receive_location() and pg_last_xlog_replay_location()
should be checked before promotion. I don't understand why, since they will
be replayed anyway. Did something change about this since 9.3?

Thanks for your help, and please excuse my poor English.

Benoit


Re: [GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Paul Ramsey
On Wed, Feb 8, 2017 at 7:44 AM, Tom Lane  wrote:

> Albe Laurenz  writes:
> > Bill Moran wrote:
> >> What I feel is the best way to mitigate the situation is to have some
> >> setting that limits the maximum RAM any backend can consume.
>
> > I'd delegate that problem to the operating system which, after all,
> > should know best of all how much memory a process uses.
>
> I've had some success using ulimit in the past, although it does have
> the disadvantage that you have to impose the same limit on every PG
> process.  (You set it before starting the postmaster and it inherits
> to every child process.)  If memory serves, limiting with the -v switch
> works better than -d or -m on Linux; but I might be misremembering.
> Conceivably we could add code to let the ulimit be set per-process,
> if the use-case were strong enough.
>
> To implement a limit inside PG, we'd have to add expensive bookkeeping
> to the palloc/pfree mechanism, and even that would be no panacea because
> it would fail to account for memory allocated directly from malloc.
> Hence, you could be pretty certain that it would be wildly inaccurate
> for sessions using third-party code such as PostGIS or Python.  An
> OS-enforced limit definitely sounds better from here.
>
>
Confirming what Tom said with respect to the specific example in this
thread: a large proportion of the allocations in the memory-hungry bits of
PostGIS are in fact made with bare malloc via the GEOS library.

P


Re: [GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Tom Lane
Albe Laurenz  writes:
> Bill Moran wrote:
>> What I feel is the best way to mitigate the situation is to have some
>> setting that limits the maximum RAM any backend can consume.

> I'd delegate that problem to the operating system which, after all,
> should know best of all how much memory a process uses.

I've had some success using ulimit in the past, although it does have
the disadvantage that you have to impose the same limit on every PG
process.  (You set it before starting the postmaster and it inherits
to every child process.)  If memory serves, limiting with the -v switch
works better than -d or -m on Linux; but I might be misremembering.
Conceivably we could add code to let the ulimit be set per-process,
if the use-case were strong enough.
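
For illustration, on Linux "ulimit -v" maps to setrlimit(RLIMIT_AS): a
wrapper can set the limit and then exec the postmaster, so every child
inherits it. A minimal sketch (the 4 GB value and the data directory path
are made-up examples):

#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    struct rlimit rl;

    rl.rlim_cur = rl.rlim_max = (rlim_t) 4 << 30;   /* 4 GB of address space */
    if (setrlimit(RLIMIT_AS, &rl) != 0)
        return 1;
    /* The limit survives fork/exec, so the postmaster and all of its
     * child backends run under it. */
    execlp("pg_ctl", "pg_ctl", "start", "-D", "/path/to/data", (char *) NULL);
    return 1;   /* reached only if exec fails */
}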

To implement a limit inside PG, we'd have to add expensive bookkeeping
to the palloc/pfree mechanism, and even that would be no panacea because
it would fail to account for memory allocated directly from malloc.
Hence, you could be pretty certain that it would be wildly inaccurate
for sessions using third-party code such as PostGIS or Python.  An
OS-enforced limit definitely sounds better from here.
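
To illustrate the accounting hole with a toy sketch (not PostgreSQL
internals): even if every tracked allocation bumped a per-backend counter,
anything a linked library grabs straight from malloc() never passes
through it.

#include <stdlib.h>

static size_t backend_bytes = 0;         /* hypothetical per-backend counter */

static void *counted_alloc(size_t sz)    /* stand-in for an instrumented palloc */
{
    backend_bytes += sz;                 /* bookkeeping on every tracked call */
    return malloc(sz);
}

static void third_party_work(void)       /* e.g. GEOS or the Python runtime */
{
    void *buf = malloc(1024 * 1024);     /* never counted in backend_bytes */
    free(buf);
}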

regards, tom lane




Re: [GENERAL] Causeless CPU load waves in backend, on windows, 9.5.5 (EDB binary).

2017-02-08 Thread Nikolai Zhubr

05.02.2017 22:05, I wrote:
[...]

And yes, running Process Explorer gave some new and unexpected input.
During the period of this strange high load it claims 40% of the CPU is
used by interrupts (normally 0.01%) and 3% by the backend postgres.exe
(normally approx. 0%). I'd guess this means some problem is happening in
the OS (which hosts this postgres.exe), probably related to network
communication? (Because nothing else seems likely to be related to
interrupts in such a scenario?)


Ok, I've got a working example as simple as the following:

/* conn is assumed to be an already-established PGconn * (libpq);
   Sleep() is the Win32 call from windows.h, taking milliseconds. */
PGresult *res;
int i;

do {
    for (i = 0; i < 40; i++) {
        Sleep(15);                  /* ~15 ms pause between queries */
        res = PQexec(conn, "SELECT localtimestamp");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            return 1;
        PQclear(res);
    }
    fprintf(stdout, "*");           /* one marker per burst of 40 queries */
    Sleep(350);                     /* pause between bursts */
} while (1);

Here, Sleep(15) and Sleep(350) are very important; removing or changing
them can make the effect disappear. It looks like some timing-sensitive
issue in the Windows TCP/IP implementation? But then, how can it be that
no one has noticed it yet? Puzzling.






Additionally, I've now got kernrate viewer reports, one for the high-load
period and one for a normal-load period, attached below. Here, tcpip shows
some more kernel activity during the problematic period, but the difference
is not that huge.

Time 36348 hits, 25000 events per hit
Module     Hits    msec    %Total   Events/Sec
intelppm   35048   45640   96 %     19198071
hal         1030   45640    2 %       564198
ntkrnlpa     204   45640    0 %       111744
tcpip         20   45640    0 %        10955
win32k        18   45640    0 %         9859
afd            6   45640    0 %         3286
ipnat          6   45640    0 %         3286
NDIS           4   45640    0 %         2191
...

Time 37227 hits, 25000 events per hit
Module     Hits    msec    %Total   Events/Sec
intelppm   35856   46828   96 %     19142393
hal         1089   46828    2 %       581382
ntkrnlpa     229   46828    0 %       122255
win32k        29   46828    0 %        15482
tcpip          9   46828    0 %         4804
NDIS           4   46828    0 %         2135
afd            3   46828    0 %         1601
psched         3   46828    0 %         1601
ipnat          2   46828    0 %         1067





The screenshot: https://yadi.sk/i/hC8FMZCE3CyzPs


Thank you.

Nikolai






Re: [GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Albe Laurenz
Bill Moran wrote:
> If you run a transaction with lots of server-side functions that use a
> lot of memory, this can trigger the OOM killer on Linux, causing the
> PostgreSQL backend to receive a SIGKILL and all the associated bad
> stuff.
> 
> Tuning the OOM killer is not sufficient. No setting I've found for the
> OOM killer will guarantee that it won't SIGKILL a process that's
> essentially untenable anyway (because at some point it's going to use more
> memory than actually exists on the system). Additionally, "add more RAM"
> doesn't "solve" the problem; it only delays it until datasets scale up to
> even larger transactions that use even more memory.

I routinely set vm.overcommit_memory = 2 and configure vm.overcommit_ratio
so that the kernel does not try to commit more memory than there is in
the machine.
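
For reference, these are kernel sysctl settings; in /etc/sysctl.conf that
might look like the following (the ratio value is only an example and
should be tuned to the machine's RAM and swap):

vm.overcommit_memory = 2
vm.overcommit_ratio = 80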

That should get rid of the problem, of course at the risk of leaving
some memory unused.

> This is particularly prevalent with PostGIS, because some PostGIS functions
> are very memory-intensive, but I'd be willing to bet real money that I could
> trigger it with just about any stored procedure that allocates memory in
> such a way that it doesn't get reclaimed until the transaction completes.
[...]
> 
> What I feel is the best way to mitigate the situation is to have some
> setting that limits the maximum RAM any backend can consume. Attempting to
> exceed this limit would cause an error and rollback for that particular
> backend without affecting other backends.
[...]
>
> My first question: does this setting exist somewhere and I'm simply not
> finding it for some reason?
> 
> Assuming this doesn't exist (I haven't found it) my next question is
> whether there's a philosophical or technical reason that such a feature
> doesn't exist? Should I take this discussion to -hackers?

I don't think that there is such a setting.

work_mem sets a limit per operation, but that is a soft limit that
PostgreSQL server code can choose to ignore if it pleases.
Moreover, it does not limit the *total* memory a backend can use.

I'd delegate that problem to the operating system which, after all,
should know best of all how much memory a process uses.
And I don't see a big advantage in a PostgreSQL-generated error message
over an "out of memory" error that is propagated from the operating system.

Of course, if there is no way to limit the amount of memory per process
(excluding shared memory!), you have a point.

Yours,
Laurenz Albe



[GENERAL] Running out of memory the hard way ...

2017-02-08 Thread Bill Moran

If you run a transaction with lots of server-side functions that use a
lot of memory, this can trigger the OOM killer on Linux, causing the
PostgreSQL backend to receive a SIGKILL and all the associated bad
stuff.

Tuning the OOM killer is not sufficient. No setting I've found for the
OOM killer will guarantee that it won't SIGKILL a process that's
essentially untenable anyway (because at some point it's going to use more
memory than actually exists on the system). Additionally, "add more RAM"
doesn't "solve" the problem; it only delays it until datasets scale up to
even larger transactions that use even more memory.

This is particularly prevalent with PostGIS, because some PostGIS functions
are very memory-intensive, but I'd be willing to bet real money that I could
trigger it with just about any stored procedure that allocates memory in
such a way that it doesn't get reclaimed until the transaction completes.
See as an example: https://trac.osgeo.org/postgis/ticket/3445
If anyone wants to investigate this but is having trouble reproducing, I
can construct specific failure scenarios fairly easily.

Another workaround is to run the offending statements in smaller
transactional batches. This is the best solution I've found so far, but
it's not quite ideal: in particular, it requires the client program to
reimplement transaction guarantees on the client side. Sometimes this
isn't necessary, but other times it is.
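
For illustration, the batching workaround might look like this with libpq
(a sketch only; process_one() and the batch size are hypothetical
stand-ins for the real per-item work):

#include <libpq-fe.h>

#define BATCH_SIZE 1000

/* Hypothetical per-item statement; in practice this would be the
 * memory-hungry call, e.g. a PostGIS update. */
static void process_one(PGconn *conn, int item)
{
    (void) conn;
    (void) item;
}

static void run_in_batches(PGconn *conn, int nitems)
{
    int done = 0;

    while (done < nitems) {
        int i;

        PQclear(PQexec(conn, "BEGIN"));
        for (i = 0; i < BATCH_SIZE && done < nitems; i++, done++)
            process_one(conn, done);
        PQclear(PQexec(conn, "COMMIT"));   /* per-transaction memory is released here */
    }
}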

What I feel is the best way to mitigate the situation is to have some
setting that limits the maximum RAM any backend can consume. Attempting to
exceed this limit would cause an error and a rollback for that particular
backend without affecting other backends. This would provide information
to the client process that can be rationally interpreted by client code,
resulting in either an error a developer can understand or, possibly,
adaptive code that changes behavior to accommodate limits on the server
side.

My first question: does this setting exist somewhere and I'm simply not
finding it for some reason?

Assuming this doesn't exist (I haven't found it) my next question is
whether there's a philosophical or technical reason that such a feature
doesn't exist? Should I take this discussion to -hackers?

-- 
Bill Moran 

