Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Pavel Stehule
2016-10-10 21:43 GMT+02:00 Periko Support :

> For the lifetime of an Odoo session, can you point me to where I can manage
> that setting?
>
> The configuration /etc/openerp-server.conf doesn't have any parameter for
> that.
>
> It must be in an Odoo file...?
>

https://www.odoo.com/forum/help-1/question/reduce-memory-usage-54636
http://www.vionblog.com/openerp-server-conf-for-openerp-7-explained/

Regards

Pavel



> Thanks.
>
> On Mon, Oct 10, 2016 at 12:25 PM, Pavel Stehule 
> wrote:
>
>>
>>
>> 2016-10-10 21:12 GMT+02:00 Periko Support :
>>
>>> Andreo, you got a good observation here.
>>>
>>> I have a script that runs every hour. Why?
>>>
>>> Odoo has some issues with IDLE connections: if we don't check our
>>> current psql connections, after a while the system eats up all connections, a
>>> lot of them sit IDLE, and it stops answering users. We created a script that runs
>>> every hour; this is it:
>>>
>>> """ Script is used to kill database connection which are idle from last
>>> 15 minutes """
>>> #!/usr/bin/env python
>>> import psycopg2
>>> import sys
>>> import os
>>> from os.path import join, expanduser
>>> import subprocess, signal, psutil
>>> import time
>>>
>>> def get_conn():
>>>
>>> conn_string = "host='localhost' dbname='template1' user='openerp'
>>> password='s$p_p@r70'"
>>>
>>> try:
>>> # get a connection, if a connect cannot be made an exception
>>> will be raised here
>>> conn = psycopg2.connect(conn_string)
>>> cursor = conn.cursor()
>>> #print "successful Connection"
>>> return cursor
>>> except:
>>> exceptionType, exceptionValue, exceptionTraceback =
>>> sys.exc_info()
>>> sys.exit("Database connection failed!\n ->%s" % (exceptionValue))
>>>
>>>
>>> def get_pid():
>>>
>>> SQL="select pid, datname, usename from pg_stat_activity where
>>> usename = 'openerp' AND query_start < current_timestamp - INTERVAL '15'
>>> MINUTE;"
>>> cursor = get_conn()
>>> cursor.execute(SQL)
>>> idle_record = cursor.fetchall()
>>> print "---
>>> "
>>> print "Date:",time.strftime("%d/%m/%Y")
>>> print "idle record list: ", idle_record
>>> print "---
>>> "
>>> for pid in idle_record:
>>> try:
>>> #print "process details",pid
>>> #os.system("kill -9 %s" % (int(pid[0]), ))
>>> os.kill(int(pid[0]), signal.SIGKILL)
>>> except OSError as ex:
>>> continue
>>>
>>> get_pid()
>>>
>>> I will change this to run less often than every hour and see the reaction.
>>>
>>> It's an easy move. Regarding Tim's question, our current KVM server looks good to me;
>>> see the output below, please:
>>>
>>>
>>> free
>>>                     total       used      free  shared  buffers   cached
>>> Mem:            181764228  136200312  45563916     468    69904   734652
>>> -/+ buffers/cache:          135395756  46368472
>>> Swap:              261948          0    261948
>>>
>>> I have other VMs, but they are on another RAID setup.
>>>
>>> Tim, you mentioned that you recommend reducing memory pressure; do you mean lowering
>>> values like shared_buffers, or increasing memory?
>>>
>>
>> try to decrease the lifetime of odoo sessions - then memory will be returned
>> back to the system - set limit_memory_soft lower in the odoo config - I found
>> some manuals on the net with wrong settings.
>>
>> the odoo sessions should be refreshed more often.
>>
>> Regards
>>
>> Pavel
>>
>>
>>
>>>
>>>
>>> Melvin, I tried that value before but my server struggled; I will add more memory
>>> in a few weeks.
>>>
>>> Any comments will be appreciated, thanks.
>>>
>>> On Mon, Oct 10, 2016 at 11:22 AM, Tom Lane  wrote:
>>>
 Periko Support  writes:
 > My current server has 82GB memory.

 You said this was running inside a VM, though --- maybe the VM is
 resource-constrained?

 In any case, turning off memory overcommit would be a good idea if
 you're not concerned about running anything but Postgres.

 regards, tom lane

>>>
>>>
>>
>


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
Scott, your script is very clean; I'm testing it, thanks.

On Mon, Oct 10, 2016 at 3:28 PM, Scott Mead  wrote:
>
>
> On Mon, Oct 10, 2016 at 6:15 PM, Tom Lane  wrote:
>>
>> Adrian Klaver  writes:
>> > On 10/10/2016 12:18 PM, Periko Support wrote:
>> >> I was on vacation, but the issue have the same behavior:
>>
>> > Actually no. Before you had:
>>
>> > 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
>> > terminated by signal 9: Killed
>>
>> > Now you have:
>>
>> > 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
>> > crash of another server process
>>
>> Most likely it *is* the same thing but the OP trimmed the second log
>> excerpt too much.  The "crash of another server process" complaints
>> suggest strongly that there was already another problem and this
>> is just part of the postmaster's kill-all-children-and-restart
>> recovery procedure.
>>
>> Now, if there really is nothing before this in the log, another possible
>> theory is that something decided to send the child processes a SIGQUIT
>> signal, which would cause them to believe that the postmaster had told
>> them to commit hara-kiri.  I only bring this up because we were already
>> shown a script sending random SIGKILLs ... so random SIGQUITs wouldn't be
>> too hard to credit either.  But the subsequent log entries don't quite
>> square with that idea; if the postmaster weren't already expecting the
>> children to die, it would have reacted differently.
>
>
>
> The better solution is to do this in one query and more safely:
>
> select pid, usename, datname, pg_terminate_backend(pid)
>FROM pg_stat_activity
>   WHERE usename = 'openerp'
>AND  now() - query_start > '15 minutes'::interval;
>
>This will use the built-in pg_terminate_backend() for you in one shot.
>
>  --Scott
>
>
>
>>
>>
>> regards, tom lane
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>
>
>
>
> --
> --
> Scott Mead
> Sr. Architect
> OpenSCG
> http://openscg.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] LOG: munmap(0x7fff80000000) failed: Invalid argument

2016-10-10 Thread Tom Lane
Andres Freund  writes:
> On 2016-10-10 18:21:48 -0400, Tom Lane wrote:
>> Chris Richards  writes:
>>> LOG:  munmap(0x7fff80000000) failed: Invalid argument

>> [ digs in code... ]  One theory is that PGSharedMemoryDetach is getting
>> called more than once, but I'm not sure how that would happen.  Can you
>> characterize where this happens more precisely?  What nondefault settings
>> have you got in postgresql.conf?

> Hm. Could that be from the DSM code?

That particular error message spelling only appears in sysv_shmem.c,
so it's not directly DSM's fault.

The comments around PGSharedMemoryDetach strongly suggest that it ought
to be a no-op if called twice, which it originally was but fails to be
since the addition of the munmap call.   So I'm *very* strongly tempted
to add "AnonymousShmem = NULL;" there and in IpcMemoryDetach.  But
it's not evident why we've not seen this behavior many times already,
so I'd kind of like to find out what's different about Chris's use-case
before assuming that that will fix it.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] LOG: munmap(0x7fff80000000) failed: Invalid argument

2016-10-10 Thread Andres Freund
On 2016-10-10 18:21:48 -0400, Tom Lane wrote:
> Chris Richards  writes:
> > Setting up postgresql-9.5 (9.5.4-1.pgdg14.04+2) ...
> > Creating new cluster 9.5/main ...
> >   config /etc/postgresql/9.5/main
> >   data   /var/lib/postgresql/9.5/main
> >   locale en_US.UTF-8
> > LOG:  munmap(0x7fff80000000) failed: Invalid argument
> > [... snip 14 or so repeats ...]
> > LOG:  munmap(0x7fff80000000) failed: Invalid argument
> >   socket /var/run/postgresql
> >   port   5433
> > update-alternatives: using
> > /usr/share/postgresql/9.5/man/man1/postmaster.1.gz to provide
> > /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
> >  * Starting PostgreSQL 9.5 database server
> >   [ OK ]
> > Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
> 
> > I'm able to connect and I dumped a few default relations.
> 
> > Is the munmap error of concern? It remains upon rebooting / restarting the
> > server.
> 
> Seems pretty fishy to me; I don't know what would be causing it.
> 
> [ digs in code... ]  One theory is that PGSharedMemoryDetach is getting
> called more than once, but I'm not sure how that would happen.  Can you
> characterize where this happens more precisely?  What nondefault settings
> have you got in postgresql.conf?

Hm. Could that be from the DSM code?

- Andres


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Scott Mead
On Mon, Oct 10, 2016 at 6:15 PM, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 10/10/2016 12:18 PM, Periko Support wrote:
> >> I was on vacation, but the issue have the same behavior:
>
> > Actually no. Before you had:
>
> > 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> > terminated by signal 9: Killed
>
> > Now you have:
>
> > 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
> > crash of another server process
>
> Most likely it *is* the same thing but the OP trimmed the second log
> excerpt too much.  The "crash of another server process" complaints
> suggest strongly that there was already another problem and this
> is just part of the postmaster's kill-all-children-and-restart
> recovery procedure.
>
> Now, if there really is nothing before this in the log, another possible
> theory is that something decided to send the child processes a SIGQUIT
> signal, which would cause them to believe that the postmaster had told
> them to commit hara-kiri.  I only bring this up because we were already
> shown a script sending random SIGKILLs ... so random SIGQUITs wouldn't be
> too hard to credit either.  But the subsequent log entries don't quite
> square with that idea; if the postmaster weren't already expecting the
> children to die, it would have reacted differently.
>


The better solution is to do this in one query and more safely:

select pid, usename, datname, pg_terminate_backend(pid)
   FROM pg_stat_activity
  WHERE usename = 'openerp'
   AND  now() - query_start > '15 minutes'::interval;

   This will use the built-in pg_terminate_backend() for you in one shot.
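
For illustration, a minimal sketch (assuming psycopg2 and placeholder credentials, not
the OP's actual script) of issuing that single statement from the hourly Python job
instead of looping over PIDs and sending signals:

#!/usr/bin/env python
# Sketch: let the server terminate long-running 'openerp' backends in one query.
import psycopg2

SQL = """
SELECT pid, usename, datname, pg_terminate_backend(pid)
  FROM pg_stat_activity
 WHERE usename = 'openerp'
   AND now() - query_start > interval '15 minutes';
"""

def terminate_long_running():
    # placeholder credentials; reuse whatever the real job uses
    conn = psycopg2.connect(host='localhost', dbname='template1',
                            user='openerp', password='xxx')
    try:
        cur = conn.cursor()
        cur.execute(SQL)
        for pid, user, db, terminated in cur.fetchall():
            print pid, user, db, terminated
        cur.close()
    finally:
        conn.close()

if __name__ == '__main__':
    terminate_long_running()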

 --Scott




>
> regards, tom lane
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
> OpenSCG
http://openscg.com


Re: [GENERAL] LOG: munmap(0x7fff80000000) failed: Invalid argument

2016-10-10 Thread Tom Lane
Chris Richards  writes:
> Setting up postgresql-9.5 (9.5.4-1.pgdg14.04+2) ...
> Creating new cluster 9.5/main ...
>   config /etc/postgresql/9.5/main
>   data   /var/lib/postgresql/9.5/main
>   locale en_US.UTF-8
> LOG:  munmap(0x7fff80000000) failed: Invalid argument
> [... snip 14 or so repeats ...]
> LOG:  munmap(0x7fff80000000) failed: Invalid argument
>   socket /var/run/postgresql
>   port   5433
> update-alternatives: using
> /usr/share/postgresql/9.5/man/man1/postmaster.1.gz to provide
> /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
>  * Starting PostgreSQL 9.5 database server
>   [ OK ]
> Processing triggers for libc-bin (2.19-0ubuntu6.6) ...

> I'm able to connect and I dumped a few default relations.

> Is the munmap error of concern? It remains upon rebooting / restarting the
> server.

Seems pretty fishy to me; I don't know what would be causing it.

[ digs in code... ]  One theory is that PGSharedMemoryDetach is getting
called more than once, but I'm not sure how that would happen.  Can you
characterize where this happens more precisely?  What nondefault settings
have you got in postgresql.conf?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Tom Lane
Adrian Klaver  writes:
> On 10/10/2016 12:18 PM, Periko Support wrote:
>> I was on vacation, but the issue have the same behavior:

> Actually no. Before you had:

> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> terminated by signal 9: Killed

> Now you have:

> 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
> crash of another server process

Most likely it *is* the same thing but the OP trimmed the second log
excerpt too much.  The "crash of another server process" complaints
suggest strongly that there was already another problem and this
is just part of the postmaster's kill-all-children-and-restart
recovery procedure.

Now, if there really is nothing before this in the log, another possible
theory is that something decided to send the child processes a SIGQUIT
signal, which would cause them to believe that the postmaster had told
them to commit hara-kiri.  I only bring this up because we were already
shown a script sending random SIGKILLs ... so random SIGQUITs wouldn't be
too hard to credit either.  But the subsequent log entries don't quite
square with that idea; if the postmaster weren't already expecting the
children to die, it would have reacted differently.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
Adrian

2016-10-10 12:00:01 PDT LOG:  connection authorized: user=openerp
database=template1
2016-10-10 12:00:01 PDT LOG:  server process (PID 30394) was
terminated by signal 9: Killed
2016-10-10 12:00:01 PDT DETAIL:  Failed process was running: SELECT
"name", "model", "description", "month" FROM "etiquetas_temp"
2016-10-10 12:00:01 PDT LOG:  terminating any other active server processes

I will do some changes to my server and see if I can fix the issue.

Thanks for your comments, and to all of you for your great help.


On Mon, Oct 10, 2016 at 2:03 PM, Adrian Klaver
 wrote:
> On 10/10/2016 12:18 PM, Periko Support wrote:
>>
>> I was on vacation, but the issue has the same behavior:
>
>
> Actually no. Before you had:
>
> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> terminated by signal 9: Killed
>
> Now you have:
>
> 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
>
> Which corresponds to this from your subsequent post:
>
> #os.system("kill -9 %s" % (int(pid[0]), ))
> os.kill(int(pid[0]), signal.SIGKILL)
>
>>
>> 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
>> crash of another server process
>> 2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
>> server process to roll back the current transaction and exit, because
>> another server process exited abnormally and possibly corrupted shared
>> memory.
>> 2016-10-10 07:50:09 PDT HINT:  In a moment you should be able to
>> reconnect to the database and repeat your command.
>> 2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
>> crash of another server process
>> 2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
>> server process to roll back the current transaction and exit, because
>> another server process exited abnormally and possibly corrupted shared
>> memory.
>> 2016-10-10 07:50:09 PDT HINT:  In a moment you should be able to
>> reconnect to the database and repeat your command.
>> 2016-10-10 07:50:09 PDT LOG:  archiver process (PID 13066) exited with
>> exit code 1
>> 2016-10-10 07:50:10 PDT LOG:  all server processes terminated;
>> reinitializing
>> 2016-10-10 07:50:10 PDT LOG:  connection received: host=192.168.2.6
>> port=37700
>> 2016-10-10 07:50:10 PDT LOG:  database system was interrupted; last
>> known up at 2016-10-10 07:49:15 PDT
>> 2016-10-10 07:50:10 PDT FATAL:  the database system is in recovery mode
>> 2016-10-10 07:50:10 PDT LOG:  connection received: host=192.168.2.6
>> port=37702
>> 2016-10-10 07:50:10 PDT FATAL:  the database system is in recovery mode
>> 2016-10-10 07:50:15 PDT LOG:  database system was not properly shut
>> down; automatic recovery in progress
>> 2016-10-10 07:50:15 PDT LOG:  redo starts at 517/C928
>> 2016-10-10 07:50:15 PDT LOG:  unexpected pageaddr 517/7700 in log
>> segment 0001051700D2, offset 0
>> 2016-10-10 07:50:15 PDT LOG:  redo done at 517/D1C8
>> 2016-10-10 07:50:15 PDT LOG:  last completed transaction was at log
>> time 2016-10-10 07:49:09.891669-07
>> 2016-10-10 07:50:15 PDT LOG:  connection received: host=192.168.2.6
>> port=37704
>> 2016-10-10 07:50:15 PDT FATAL:  the database system is in recovery mode
>> 2016-10-10 07:50:15 PDT LOG:  connection received: host=192.168.2.6
>> port=37706
>> 2016-10-10 07:50:15 PDT FATAL:  the database system is in recovery mode
>> 2016-10-10 07:50:16 PDT LOG:  MultiXact member wraparound protections
>> are now enabled
>> 2016-10-10 07:50:16 PDT LOG:  database system is ready to accept
>> connections
>> 2016-10-10 07:50:16 PDT LOG:  autovacuum launcher started
>>
>> 2016-10-10 09:00:01 PDT LOG:  archiver process (PID 14004) exited with
>> exit code 1
>> 2016-10-10 09:00:01 PDT WARNING:  terminating connection because of
>> crash of another server process
>> 2016-10-10 09:00:01 PDT DETAIL:  The postmaster has commanded this
>> server process to roll back the current transaction and exit, because
>> another server process exited abnormally and possibly corrupted shared
>> memory.
>> 2016-10-10 09:00:01 PDT HINT:  In a moment you should be able to
>> reconnect to the database and repeat your command.
>> 2016-10-10 09:00:01 PDT WARNING:  terminating connection because of
>> crash of another server process
>> 2016-10-10 09:00:01 PDT DETAIL:  The postmaster has commanded this
>> server process to roll back the current transaction and exit, because
>> another server process exited abnormally and possibly corrupted shared
>> memory.
>> 2016-10-10 09:00:01 PDT HINT:  In a moment you should be able to
>> reconnect to the database and repeat your command.
>> 2016-10-10 09:00:01 PDT LOG:  all server processes terminated;
>> reinitializing
>> 

Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Alban Hertroys

> On 10 Oct 2016, at 21:43, Periko Support  wrote:
> 
> For the life time in odoo session, can u point me where I can manage that 
> setting?
> 
> The configuration /etc/openerp-server.conf doesn't have any parameter for 
> that.
> 
> That must be in a odoo file...?
> 
> Thanks.
> 
> On Mon, Oct 10, 2016 at 12:25 PM, Pavel Stehule  
> wrote:
> 
> 
> 2016-10-10 21:12 GMT+02:00 Periko Support :
> Andreo u got a good observation here.
> 
> I got a script that run every hour why?
> 
> Odoo got some issues with IDLE connections, if we don't check our current 
> psql connections after a while the system eat all connections and a lot of 
> them are IDLE and stop answering users, we create a script that runs every 
> hour, this is:

That's all part of Odoo (formerly known as OpenERP), isn't it? Did you contact 
them about this behaviour yet? Might just be that they're familiar with the 
problem and have a solution for it.

I suspect the Python script you're running was implemented as a rather rough 
workaround by people from allianzgrp who knew just enough to be harmful. (Kill 
-9 on a database process, jeez! Keyboards should have an electroshock feature 
for people like that…)

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] LOG: munmap(0x7fff80000000) failed: Invalid argument

2016-10-10 Thread Chris Richards
Howdy.

I have an Ubuntu 14.04 LTS system, configured with huge pages (4x1GB, transparent
huge pages disabled) and a later kernel than what ships with 14.04.

root@ff2:~# uname -a
Linux ff2 3.16.7-ckt11-061515+ #1 SMP Mon Jun 15 18:47:13 CDT 2015 x86_64
x86_64 x86_64 GNU/Linux

It had postgresql-9.3 on it and I installed postgresql-9.5 via these steps:

aptitude remove -y postgresql-9.3
echo "deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main 9.5" >
/etc/apt/sources.list.d/postgresql.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7FCC7D46ACCC4CF8
apt-get update
apt-get install -y postgresql-client-9.5 postgresql-common postgresql-9.5

At the end of the install, these were emitted to the screen:

Setting up postgresql-9.5 (9.5.4-1.pgdg14.04+2) ...
Creating new cluster 9.5/main ...
  config /etc/postgresql/9.5/main
  data   /var/lib/postgresql/9.5/main
  locale en_US.UTF-8
LOG:  munmap(0x7fff80000000) failed: Invalid argument
[... snip 14 or so repeats ...]
LOG:  munmap(0x7fff80000000) failed: Invalid argument
  socket /var/run/postgresql
  port   5433
update-alternatives: using
/usr/share/postgresql/9.5/man/man1/postmaster.1.gz to provide
/usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
 * Starting PostgreSQL 9.5 database server
  [ OK ]
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...

I'm able to connect and I dumped a few default relations.

Is the munmap error of concern? It remains upon rebooting / restarting the
server.

Thanks,
Chris


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Adrian Klaver

On 10/10/2016 12:18 PM, Periko Support wrote:

I was on vacation, but the issue has the same behavior:


Actually no. Before you had:

2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
terminated by signal 9: Killed

Now you have:

2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.

Which corresponds to this from your subsequent post:

#os.system("kill -9 %s" % (int(pid[0]), ))
os.kill(int(pid[0]), signal.SIGKILL)



2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 07:50:09 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 07:50:09 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 07:50:09 PDT LOG:  archiver process (PID 13066) exited with
exit code 1
2016-10-10 07:50:10 PDT LOG:  all server processes terminated; reinitializing
2016-10-10 07:50:10 PDT LOG:  connection received: host=192.168.2.6 port=37700
2016-10-10 07:50:10 PDT LOG:  database system was interrupted; last
known up at 2016-10-10 07:49:15 PDT
2016-10-10 07:50:10 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:10 PDT LOG:  connection received: host=192.168.2.6 port=37702
2016-10-10 07:50:10 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:15 PDT LOG:  database system was not properly shut
down; automatic recovery in progress
2016-10-10 07:50:15 PDT LOG:  redo starts at 517/C928
2016-10-10 07:50:15 PDT LOG:  unexpected pageaddr 517/7700 in log
segment 0001051700D2, offset 0
2016-10-10 07:50:15 PDT LOG:  redo done at 517/D1C8
2016-10-10 07:50:15 PDT LOG:  last completed transaction was at log
time 2016-10-10 07:49:09.891669-07
2016-10-10 07:50:15 PDT LOG:  connection received: host=192.168.2.6 port=37704
2016-10-10 07:50:15 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:15 PDT LOG:  connection received: host=192.168.2.6 port=37706
2016-10-10 07:50:15 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:16 PDT LOG:  MultiXact member wraparound protections
are now enabled
2016-10-10 07:50:16 PDT LOG:  database system is ready to accept connections
2016-10-10 07:50:16 PDT LOG:  autovacuum launcher started

2016-10-10 09:00:01 PDT LOG:  archiver process (PID 14004) exited with
exit code 1
2016-10-10 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 09:00:01 PDT LOG:  all server processes terminated; reinitializing
2016-10-10 09:00:02 PDT LOG:  database system was interrupted; last
known up at 2016-10-10 08:59:33 PDT
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35950
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35951
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35952
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35953
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35954
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 

Re: [GENERAL] custom average window function failure

2016-10-10 Thread Merlin Moncure
On Sun, Oct 9, 2016 at 10:10 PM, Sebastian P. Luque  wrote:
> On Sun, 09 Oct 2016 16:00:21 -0400,
> Tom Lane  wrote:
>
>> "Sebastian P. Luque"  writes:
>>> Tom Lane  wrote:
 On closer inspection, the error is only in the
 aggregate-used-as-window-function case, not plain aggregation.
>
>>> Yes, I see the same phenomenon.  Could someone suggest a workaround
>>> until this is fixed?  I'm under the gun to submit output tables and
>>> the only thing I can think of is a crawling slow loop to step through
>>> each window twice: once using the plain aggregation and another
>>> without, just to get all the rows.  I highly doubt it will be worthwhile,
>>> given it's going to be about 1000 iterations, and each one would take
>>> about 30-45 min...
>
>> Are you in a position to apply patches?  It's a one-line fix:
>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=dca25c2562199ce1e7e26367613912a8eadbbde8
>
>> Alternatively, the problem doesn't manifest when the aggregate
>> transtype and output type are the same, so you could probably refactor
>> your code to use plain array_agg and apply the finalfunc separately in
>> the SQL query.
>
> Perfect, I'll try the latter option on this one.  Thanks so much to both
> of you for your prompt feedback!

Aside: nice use of custom aggregates through window functions.  I use
this tactic heavily.
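
For reference, a minimal sketch of the refactoring Tom describes (the table
measurements(t, val) and the final function my_avg_final(float8[]) are hypothetical;
the point is that the built-in array_agg runs as the window function and the custom
final step is applied outside it):

-- collect each frame with array_agg, then apply the final function
-- to the resulting array in the outer query
SELECT t,
       my_avg_final(vals) AS smoothed_val
FROM (
    SELECT t,
           array_agg(val) OVER (ORDER BY t
                                ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS vals
    FROM measurements
) AS framed;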

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
For the lifetime of an Odoo session, can you point me to where I can manage that
setting?

The configuration /etc/openerp-server.conf doesn't have any parameter for
that.

It must be in an Odoo file...?

Thanks.

On Mon, Oct 10, 2016 at 12:25 PM, Pavel Stehule 
wrote:

>
>
> 2016-10-10 21:12 GMT+02:00 Periko Support :
>
>> Andreo u got a good observation here.
>>
>> I got a script that run every hour why?
>>
>> Odoo got some issues with IDLE connections, if we don't check our current
>> psql connections after a while the system eat all connections and a lot of
>> them are IDLE and stop answering users, we create a script that runs every
>> hour, this is:
>>
>> """ Script is used to kill database connection which are idle from last
>> 15 minutes """
>> #!/usr/bin/env python
>> import psycopg2
>> import sys
>> import os
>> from os.path import join, expanduser
>> import subprocess, signal, psutil
>> import time
>>
>> def get_conn():
>>
>> conn_string = "host='localhost' dbname='template1' user='openerp'
>> password='s$p_p@r70'"
>>
>> try:
>> # get a connection, if a connect cannot be made an exception will
>> be raised here
>> conn = psycopg2.connect(conn_string)
>> cursor = conn.cursor()
>> #print "successful Connection"
>> return cursor
>> except:
>> exceptionType, exceptionValue, exceptionTraceback = sys.exc_info()
>> sys.exit("Database connection failed!\n ->%s" % (exceptionValue))
>>
>>
>> def get_pid():
>>
>> SQL="select pid, datname, usename from pg_stat_activity where usename
>> = 'openerp' AND query_start < current_timestamp - INTERVAL '15' MINUTE;"
>> cursor = get_conn()
>> cursor.execute(SQL)
>> idle_record = cursor.fetchall()
>> print "---
>> "
>> print "Date:",time.strftime("%d/%m/%Y")
>> print "idle record list: ", idle_record
>> print "---
>> "
>> for pid in idle_record:
>> try:
>> #print "process details",pid
>> #os.system("kill -9 %s" % (int(pid[0]), ))
>> os.kill(int(pid[0]), signal.SIGKILL)
>> except OSError as ex:
>> continue
>>
>> get_pid()
>>
>> I will move this to run not every hour and see the reaction.
>>
>> Is a easy move, about Tim, our current KVM server is good for me, see
>> picture please:
>>
>>
>> free
>>  total   used   free sharedbuffers cached
>> Mem: 181764228  136200312   45563916468  69904 734652
>> -/+ buffers/cache:  135395756   46368472
>> Swap:   261948  0 261948
>>
>> I got other vm but they are on other raid setup.
>>
>> Tim u mention that u recommend reduce memory pressure, u mean to lower
>> down my values like shared_buffers or increase memory?
>>
>
> try to decrease lifetime of odoo sessions - then memory will be returned
> back to system - set limit_memory_soft less in odoo config - I found some
> manuals on net with wrong settings on net.
>
> the odoo sessions should be refreshed more often.
>
> Regards
>
> Pavel
>
>
>
>>
>>
>> Melvin I try that value before but my server cry, I will add more memory
>> in a few weeks.
>>
>> Any comment I will appreciated, thanks.
>>
>> On Mon, Oct 10, 2016 at 11:22 AM, Tom Lane  wrote:
>>
>>> Periko Support  writes:
>>> > My current server has 82GB memory.
>>>
>>> You said this was running inside a VM, though --- maybe the VM is
>>> resource-constrained?
>>>
>>> In any case, turning off memory overcommit would be a good idea if
>>> you're not concerned about running anything but Postgres.
>>>
>>> regards, tom lane
>>>
>>
>>
>


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
That script was from a vendor called 'allianzgrp.com'.

It was their solution.

Then I have a lot of work to do here.

On Mon, Oct 10, 2016 at 12:32 PM, Alban Hertroys  wrote:
>
>> On 10 Oct 2016, at 21:28, Alban Hertroys  wrote:
>>
>>
>>> On 10 Oct 2016, at 21:12, Periko Support  wrote:
>>>
>>>for pid in idle_record:
>>>try:
>>> #print "process details",pid
>>> #os.system("kill -9 %s" % (int(pid[0]), ))
>>>os.kill(int(pid[0]), signal.SIGKILL)
>>>except OSError as ex:
>>>continue
>>
>> That query returns PostgreSQL backends and you're sending them SIGKILL. Not 
>> a recommended practice far as I know. Shouldn't you rather be sending those 
>> kill signals to the clients connecting to the db?
>> Worse, apparently at some time in the past (a month ago matching those logs, 
>> perhaps?) it used to send kill -9! That's absolutely a very bad idea.
>>
>> While on the topic, there is a PG function to cancel a backend query from 
>> within PG: https://www.postgresql.org/docs/9.5/static/functions-admin.html
>> I think that's the best way to go about this, and best of all, you can 
>> combine that with your select statement.
>
> Another idea struck me; if that script is under version control, you can 
> check when that change was committed. If it isn't, perhaps you should. My 
> current favourite is Hg (aka Mercurial), which happens to be written in 
> Python, just like your script.
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Alban Hertroys

> On 10 Oct 2016, at 21:28, Alban Hertroys  wrote:
> 
> 
>> On 10 Oct 2016, at 21:12, Periko Support  wrote:
>> 
>>for pid in idle_record:
>>try:
>> #print "process details",pid
>> #os.system("kill -9 %s" % (int(pid[0]), ))
>>os.kill(int(pid[0]), signal.SIGKILL)
>>except OSError as ex:
>>continue
> 
> That query returns PostgreSQL backends and you're sending them SIGKILL. Not a 
> recommended practice far as I know. Shouldn't you rather be sending those 
> kill signals to the clients connecting to the db?
> Worse, apparently at some time in the past (a month ago matching those logs, 
> perhaps?) it used to send kill -9! That's absolutely a very bad idea.
> 
> While on the topic, there is a PG function to cancel a backend query from 
> within PG: https://www.postgresql.org/docs/9.5/static/functions-admin.html
> I think that's the best way to go about this, and best of all, you can 
> combine that with your select statement.

Another idea struck me; if that script is under version control, you can check 
when that change was committed. If it isn't, perhaps you should. My current 
favourite is Hg (aka Mercurial), which happens to be written in Python, just 
like your script.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Alban Hertroys

> On 10 Oct 2016, at 21:12, Periko Support  wrote:
> 
> for pid in idle_record:
> try:
> #print "process details",pid
> #os.system("kill -9 %s" % (int(pid[0]), ))
> os.kill(int(pid[0]), signal.SIGKILL)
> except OSError as ex:
> continue

That query returns PostgreSQL backends and you're sending them SIGKILL. Not a
recommended practice as far as I know. Shouldn't you rather be sending those kill
signals to the clients connecting to the db?
Worse, apparently at some time in the past (a month ago matching those logs, 
perhaps?) it used to send kill -9! That's absolutely a very bad idea.

While on the topic, there is a PG function to cancel a backend query from 
within PG: https://www.postgresql.org/docs/9.5/static/functions-admin.html
I think that's the best way to go about this, and best of all, you can combine 
that with your select statement.
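
For example, a sketch of that combination, reusing the filter from the original script
(pg_cancel_backend() stops the running statement; pg_terminate_backend() would close
the whole connection):

-- cancel, rather than kill, queries from 'openerp' that have been
-- running for more than 15 minutes, from within PostgreSQL itself
SELECT pid, usename, datname, pg_cancel_backend(pid)
  FROM pg_stat_activity
 WHERE usename = 'openerp'
   AND query_start < current_timestamp - INTERVAL '15 minutes';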

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Pavel Stehule
2016-10-10 21:12 GMT+02:00 Periko Support :

> Andreo u got a good observation here.
>
> I got a script that run every hour why?
>
> Odoo got some issues with IDLE connections, if we don't check our current
> psql connections after a while the system eat all connections and a lot of
> them are IDLE and stop answering users, we create a script that runs every
> hour, this is:
>
> """ Script is used to kill database connection which are idle from last 15
> minutes """
> #!/usr/bin/env python
> import psycopg2
> import sys
> import os
> from os.path import join, expanduser
> import subprocess, signal, psutil
> import time
>
> def get_conn():
>
> conn_string = "host='localhost' dbname='template1' user='openerp'
> password='s$p_p@r70'"
>
> try:
> # get a connection, if a connect cannot be made an exception will
> be raised here
> conn = psycopg2.connect(conn_string)
> cursor = conn.cursor()
> #print "successful Connection"
> return cursor
> except:
> exceptionType, exceptionValue, exceptionTraceback = sys.exc_info()
> sys.exit("Database connection failed!\n ->%s" % (exceptionValue))
>
>
> def get_pid():
>
> SQL="select pid, datname, usename from pg_stat_activity where usename
> = 'openerp' AND query_start < current_timestamp - INTERVAL '15' MINUTE;"
> cursor = get_conn()
> cursor.execute(SQL)
> idle_record = cursor.fetchall()
> print "---
> "
> print "Date:",time.strftime("%d/%m/%Y")
> print "idle record list: ", idle_record
> print "---
> "
> for pid in idle_record:
> try:
> #print "process details",pid
> #os.system("kill -9 %s" % (int(pid[0]), ))
> os.kill(int(pid[0]), signal.SIGKILL)
> except OSError as ex:
> continue
>
> get_pid()
>
> I will move this to run not every hour and see the reaction.
>
> Is a easy move, about Tim, our current KVM server is good for me, see
> picture please:
>
>
> free
>  total   used   free sharedbuffers cached
> Mem: 181764228  136200312   45563916468  69904 734652
> -/+ buffers/cache:  135395756   46368472
> Swap:   261948  0 261948
>
> I got other vm but they are on other raid setup.
>
> Tim u mention that u recommend reduce memory pressure, u mean to lower
> down my values like shared_buffers or increase memory?
>

try to decrease the lifetime of odoo sessions - then memory will be returned
back to the system - set limit_memory_soft lower in the odoo config - I found some
manuals on the net with wrong settings.

the odoo sessions should be refreshed more often.
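
For example, a sketch of the relevant openerp-server.conf entries (values are only
illustrative and assume the multi-worker mode; limit_memory_soft is the setting
referred to above, the others are the usual companion limits - check your version's
documentation):

workers = 4
# soft per-worker memory limit in bytes (~640 MB); a worker that goes
# over is recycled, so the memory is returned to the system
limit_memory_soft = 671088640
# hard per-worker memory limit in bytes (~768 MB)
limit_memory_hard = 805306368
# also recycle a worker after this many handled requests
limit_request = 8192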

Regards

Pavel



>
>
> Melvin I try that value before but my server cry, I will add more memory
> in a few weeks.
>
> Any comment I will appreciated, thanks.
>
> On Mon, Oct 10, 2016 at 11:22 AM, Tom Lane  wrote:
>
>> Periko Support  writes:
>> > My current server has 82GB memory.
>>
>> You said this was running inside a VM, though --- maybe the VM is
>> resource-constrained?
>>
>> In any case, turning off memory overcommit would be a good idea if
>> you're not concerned about running anything but Postgres.
>>
>> regards, tom lane
>>
>
>


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
I was on vacation, but the issue has the same behavior:

2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 07:50:09 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 07:50:09 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 07:50:09 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 07:50:09 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 07:50:09 PDT LOG:  archiver process (PID 13066) exited with
exit code 1
2016-10-10 07:50:10 PDT LOG:  all server processes terminated; reinitializing
2016-10-10 07:50:10 PDT LOG:  connection received: host=192.168.2.6 port=37700
2016-10-10 07:50:10 PDT LOG:  database system was interrupted; last
known up at 2016-10-10 07:49:15 PDT
2016-10-10 07:50:10 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:10 PDT LOG:  connection received: host=192.168.2.6 port=37702
2016-10-10 07:50:10 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:15 PDT LOG:  database system was not properly shut
down; automatic recovery in progress
2016-10-10 07:50:15 PDT LOG:  redo starts at 517/C928
2016-10-10 07:50:15 PDT LOG:  unexpected pageaddr 517/7700 in log
segment 0001051700D2, offset 0
2016-10-10 07:50:15 PDT LOG:  redo done at 517/D1C8
2016-10-10 07:50:15 PDT LOG:  last completed transaction was at log
time 2016-10-10 07:49:09.891669-07
2016-10-10 07:50:15 PDT LOG:  connection received: host=192.168.2.6 port=37704
2016-10-10 07:50:15 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:15 PDT LOG:  connection received: host=192.168.2.6 port=37706
2016-10-10 07:50:15 PDT FATAL:  the database system is in recovery mode
2016-10-10 07:50:16 PDT LOG:  MultiXact member wraparound protections
are now enabled
2016-10-10 07:50:16 PDT LOG:  database system is ready to accept connections
2016-10-10 07:50:16 PDT LOG:  autovacuum launcher started

2016-10-10 09:00:01 PDT LOG:  archiver process (PID 14004) exited with
exit code 1
2016-10-10 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-10-10 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-10-10 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-10-10 09:00:01 PDT LOG:  all server processes terminated; reinitializing
2016-10-10 09:00:02 PDT LOG:  database system was interrupted; last
known up at 2016-10-10 08:59:33 PDT
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35950
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35951
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35952
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35953
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35954
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:03 PDT LOG:  connection received: host=127.0.0.1 port=35955
2016-10-10 09:00:03 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:05 PDT LOG:  connection received: host=192.168.2.6 port=39380
2016-10-10 09:00:05 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:05 PDT LOG:  connection received: host=192.168.2.6 port=39382
2016-10-10 09:00:05 PDT FATAL:  the database system is in recovery mode
2016-10-10 09:00:07 PDT LOG:  database system was not properly shut
down; automatic recovery in progress
2016-10-10 09:00:07 PDT LOG:  redo starts at 51A/8228
2016-10-10 09:00:07 PDT LOG:  record with zero length at 

Re: [GENERAL] HA Cluster Solution?

2016-10-10 Thread Periko Support
I'm trying to get better numbers; it's an option on the table.
Meanwhile I'm reading up on some system performance numbers.
Yes, odoo is strange sometimes.
But a cluster would be good for HA.
Thanks.

On Mon, Oct 10, 2016 at 11:12 AM, Pavel Stehule  wrote:
> Hi
>
> 2016-10-10 17:20 GMT+02:00 Periko Support :
>>
>> I have done some tuning for psql base on odoo, but I want to know if a
>> cluster can help to get a better performance, this why I ask here in
>> the community who has experience with clusters.
>> Appreciate your help Pavel.
>
>
> It is hard to say if a cluster helps - it depends on the data size - but odoo
> produces really strange queries, so I am a little bit sceptical. But I am sure
> that a cluster significantly increases maintenance costs.
>
> Regards
>
> Pavel
>
>>
>>
>> On Sun, Oct 9, 2016 at 9:29 PM, Pavel Stehule 
>> wrote:
>> > Hi
>> >
>> > 2016-10-10 6:22 GMT+02:00 Periko Support :
>> >>
>> >>   Hi.
>> >>
>> >>   We are searching for a cluster solutions for postgresql, we need to
>> >> increase our current psql server performance running under ubuntu 14
>> >> v9.3.
>> >>
>> >>The db is for odoo 7.x
>> >
>> >
>> > I have some experience with odoo 7.x - there are a lot of pretty slow
>> > queries
>> > without much possibility of optimization. What's more, there are some articles
>> > on
>> > the net that advise a suboptimal configuration of Postgres and of odoo.
>> >
>> > If you can, you can try to upgrade to PostgreSQL 9.6 - more CPU per
>> > query
>> > can help, or you can upgrade to a newer odoo (I have no real experience,
>> > but
>> > the web speaks about a lot of optimization).
>> >
>> > If you have some own customization, look to slow queries - lot of
>> > performance issues can be fixed.
>> >
>> > Regards
>> >
>> > Pavel
>> >
>> >
>> >
>> >>
>> >>
>> >>Reading on psql doc cluster, exist different solutions like
>> >> postgresql-xl, pgpool which are more for clustering.
>> >>
>> >> In your experience what psql tools for clustering and HA have give
>> >> u good performance.
>> >>
>> >> Any comment are welcome, thanks for your time!!!
>> >>
>> >>
>> >> --
>> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> >> To make changes to your subscription:
>> >> http://www.postgresql.org/mailpref/pgsql-general
>> >
>> >
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
Andreo, you got a good observation here.

I have a script that runs every hour. Why?

Odoo has some issues with IDLE connections: if we don't check our current
psql connections, after a while the system eats up all connections, a lot of
them sit IDLE, and it stops answering users. We created a script that runs every
hour; this is it:

""" Script is used to kill database connection which are idle from last 15
minutes """
#!/usr/bin/env python
import psycopg2
import sys
import os
from os.path import join, expanduser
import subprocess, signal, psutil
import time

def get_conn():

conn_string = "host='localhost' dbname='template1' user='openerp'
password='s$p_p@r70'"

try:
# get a connection, if a connect cannot be made an exception will
be raised here
conn = psycopg2.connect(conn_string)
cursor = conn.cursor()
#print "successful Connection"
return cursor
except:
exceptionType, exceptionValue, exceptionTraceback = sys.exc_info()
sys.exit("Database connection failed!\n ->%s" % (exceptionValue))


def get_pid():

SQL="select pid, datname, usename from pg_stat_activity where usename =
'openerp' AND query_start < current_timestamp - INTERVAL '15' MINUTE;"
cursor = get_conn()
cursor.execute(SQL)
idle_record = cursor.fetchall()
print
"---"
print "Date:",time.strftime("%d/%m/%Y")
print "idle record list: ", idle_record
print
"---"
for pid in idle_record:
try:
#print "process details",pid
#os.system("kill -9 %s" % (int(pid[0]), ))
os.kill(int(pid[0]), signal.SIGKILL)
except OSError as ex:
continue

get_pid()

I will change this to run less often than every hour and see the reaction.

It's an easy move. Regarding Tim's question, our current KVM server looks good to me;
see the output below, please:


free
                    total       used      free  shared  buffers   cached
Mem:            181764228  136200312  45563916     468    69904   734652
-/+ buffers/cache:          135395756  46368472
Swap:              261948          0    261948

I have other VMs, but they are on another RAID setup.

Tim, you mentioned that you recommend reducing memory pressure; do you mean lowering
values like shared_buffers, or increasing memory?

Melvin, I tried that value before but my server struggled; I will add more memory in
a few weeks.

Any comments will be appreciated, thanks.

On Mon, Oct 10, 2016 at 11:22 AM, Tom Lane  wrote:

> Periko Support  writes:
> > My current server has 82GB memory.
>
> You said this was running inside a VM, though --- maybe the VM is
> resource-constrained?
>
> In any case, turning off memory overcommit would be a good idea if
> you're not concerned about running anything but Postgres.
>
> regards, tom lane
>


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Adrian Klaver

On 10/10/2016 11:14 AM, Moreno Andreo wrote:


On 10/10/2016 18:24, Periko Support wrote:

2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
terminated by signal 9: Killed



2016-09-12 10:00:01 PDT LOG:  server process (PID 30766) was
terminated by signal 9: Killed



2016-09-12 15:00:01 PDT LOG:  server process (PID 22030) was
terminated by signal 9: Killed



These datetimes could be suspect. Every crash (kill) happens at
"00" minutes and "01" seconds, which makes me ask: "Isn't there something
like cron running something that interferes with postgres?"


While we are on the subject, the datetimes are almost a month old.

Does that mean this problem was just noticed or are the datetimes wrong?



Cheers,
Moreno.







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Tom Lane
Periko Support  writes:
> My current server has 82GB memory.

You said this was running inside a VM, though --- maybe the VM is
resource-constrained?

In any case, turning off memory overcommit would be a good idea if
you're not concerned about running anything but Postgres.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Melvin Davidson
On Mon, Oct 10, 2016 at 2:14 PM, Moreno Andreo 
wrote:

>
> On 10/10/2016 18:24, Periko Support wrote:
>
>> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
>> terminated by signal 9: Killed
>>
>
> 2016-09-12 10:00:01 PDT LOG:  server process (PID 30766) was
>> terminated by signal 9: Killed
>>
>
> 2016-09-12 15:00:01 PDT LOG:  server process (PID 22030) was
>> terminated by signal 9: Killed
>>
>>
>> These datetimes could be suspect. Every crash (kill) happens at
> "00" minutes and "01" seconds, which makes me ask: "Isn't there something like
> cron running something that interferes with postgres?"
>
> Cheers,
> Moreno.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

The general philosophy is to start by setting shared_buffers to 1/4 of system
memory, so
shared_buffers should be 20480 MB.
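
Applied to the 82 GB reported earlier, that rule of thumb gives roughly 82 / 4 ≈ 20 GB,
e.g. in postgresql.conf:

# about a quarter of the machine's RAM
shared_buffers = 20480MB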

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [SPAM] [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Moreno Andreo


On 10/10/2016 18:24, Periko Support wrote:

2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
terminated by signal 9: Killed



2016-09-12 10:00:01 PDT LOG:  server process (PID 30766) was
terminated by signal 9: Killed



2016-09-12 15:00:01 PDT LOG:  server process (PID 22030) was
terminated by signal 9: Killed


These datetimes could be suspect. Every crash (kill) happens at
"00" minutes and "01" seconds, which makes me ask: "Isn't there something
like cron running something that interferes with postgres?"
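
For example, a standard hourly cron entry such as the hypothetical one below fires at
exactly hh:00, which would match those timestamps:

0 * * * * python /path/to/kill_idle_connections.py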


Cheers,
Moreno.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] HA Cluster Solution?

2016-10-10 Thread Pavel Stehule
Hi

2016-10-10 17:20 GMT+02:00 Periko Support :

> I have done some tuning for psql base on odoo, but I want to know if a
> cluster can help to get a better performance, this why I ask here in
> the community who has experience with clusters.
> Appreciate your help Pavel.
>

It is hard to say if a cluster helps - it depends on the data size - but odoo
produces really strange queries, so I am a little bit sceptical. But I am sure
that a cluster significantly increases maintenance costs.

Regards

Pavel


>
> On Sun, Oct 9, 2016 at 9:29 PM, Pavel Stehule 
> wrote:
> > Hi
> >
> > 2016-10-10 6:22 GMT+02:00 Periko Support :
> >>
> >>   Hi.
> >>
> >>   We are searching for a cluster solutions for postgresql, we need to
> >> increase our current psql server performance running under ubuntu 14
> >> v9.3.
> >>
> >>The db is for odoo 7.x
> >
> >
> > I have some experience with odoo 7.x - there are a lot of pretty slow
> > queries
> > without much possibility of optimization. What's more, there are some articles
> > on
> > the net that advise a suboptimal configuration of Postgres and of odoo.
> >
> > If you can, you can try to upgrade to PostgreSQL 9.6 - more CPU per query
> > can help, or you can upgrade to a newer odoo (I have no real experience, but
> > the web speaks about a lot of optimization).
> >
> > If you have some own customization, look to slow queries - lot of
> > performance issues can be fixed.
> >
> > Regards
> >
> > Pavel
> >
> >
> >
> >>
> >>
> >>Reading on psql doc cluster, exist different solutions like
> >> postgresql-xl, pgpool which are more for clustering.
> >>
> >> In your experience what psql tools for clustering and HA have give
> >> u good performance.
> >>
> >> Any comment are welcome, thanks for your time!!!
> >>
> >>
> >> --
> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> >> To make changes to your subscription:
> >> http://www.postgresql.org/mailpref/pgsql-general
> >
> >
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
Or add more memory to my server?

On Mon, Oct 10, 2016 at 11:05 AM, Periko Support
 wrote:
> My current server has 82GB memory.
>
> Default settings, but the only parameter I have changed is shared_buffers,
> from 128MB to 6GB.
>
> This server is dedicated to postgresql+odoo.
>
> Is that the only parameter that can reduce my memory utilization?
>
> Thanks Tom.
>
>
> On Mon, Oct 10, 2016 at 10:03 AM, Tom Lane  wrote:
>> Periko Support  writes:
>>> I got some issues with my DB under ubuntu 14.x.
>>> PSQL 9.3, odoo 7.x.
>>
>>> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
>>> terminated by signal 9: Killed
>>
>> Usually, SIGKILLs coming out of nowhere indicate that the Linux OOM killer
>> has decided to target some database process.  You need to do something to
>> reduce memory pressure and/or disable memory overcommit so that that
>> doesn't happen.
>>
>> regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
My current server has 82GB memory.

Default settings, except for the only parameter I have changed:
shared_buffers, from 128MB to 6GB.

This server is dedicated to postgresql+odoo.

Is that the only parameter that can reduce my memory utilization?

Thanks Tom.


On Mon, Oct 10, 2016 at 10:03 AM, Tom Lane  wrote:
> Periko Support  writes:
>> I got some issues with my DB under ubuntu 14.x.
>> PSQL 9.3, odoo 7.x.
>
>> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
>> terminated by signal 9: Killed
>
> Usually, SIGKILLs coming out of nowhere indicate that the Linux OOM killer
> has decided to target some database process.  You need to do something to
> reduce memory pressure and/or disable memory overcommit so that that
> doesn't happen.
>
> regards, tom lane
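
For what it's worth, shared_buffers is not the only memory knob; the
per-backend settings usually matter more for overall memory pressure. A rough
postgresql.conf sketch for a dedicated PostgreSQL+odoo box follows - the
numbers are only illustrative starting points, not recommendations for this
particular workload:

shared_buffers = 6GB              # shared cache, allocated once at startup
work_mem = 16MB                   # per sort/hash node, multiplied by concurrent queries
maintenance_work_mem = 512MB      # used by VACUUM, CREATE INDEX, etc.
effective_cache_size = 48GB       # planner hint only, allocates no memory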




Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Tom Lane
Periko Support  writes:
> I got some issues with my DB under ubuntu 14.x.
> PSQL 9.3, odoo 7.x.

> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> terminated by signal 9: Killed

Usually, SIGKILLs coming out of nowhere indicate that the Linux OOM killer
has decided to target some database process.  You need to do something to
reduce memory pressure and/or disable memory overcommit so that that
doesn't happen.

regards, tom lane
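
A small shell sketch of how one might confirm the OOM killer was involved and
then disable overcommit, along the lines suggested above; the sysctl values
are the ones commonly cited for dedicated database hosts, so adjust to taste:

# check the kernel log for OOM-killer activity
dmesg | grep -i -E 'out of memory|killed process'

# disable memory overcommit (add to /etc/sysctl.conf to make it persistent)
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=80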




Re: [GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
I want to add that my server load is normal; please see the attachment, thanks.



On Mon, Oct 10, 2016 at 9:24 AM, Periko Support 
wrote:

> I got some issues with my DB under Ubuntu 14.x.
> PostgreSQL 9.3, odoo 7.x.
>
> This machine is under KVM on CentOS 6.x.
>
> It has a RAID1 with SSD drives, used only for this VM.
>
> I detected some unexpected shutdowns; see these lines:
>
> 2016-09-12 08:59:25 PDT ERROR:  missing FROM-clause entry for table
> "rp" at character 73
> 2016-09-12 08:59:25 PDT STATEMENT:  select
> pp.default_code,pc.product_code,pp.name_template,pc.product_name,rp.name
> from product_product pp inner join product_customer_code pc on
> pc.product_id=pp.id
> 2016-09-12 08:59:26 PDT LOG:  connection received: host=192.168.2.153
> port=59335
> 2016-09-12 08:59:26 PDT LOG:  connection authorized: user=openerp
> database=Mueblex
> 2016-09-12 09:00:01 PDT LOG:  connection received: host=::1 port=43536
> 2016-09-12 09:00:01 PDT LOG:  connection authorized: user=openerp
> database=template1
> 2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
> terminated by signal 9: Killed
> 2016-09-12 09:00:01 PDT DETAIL:  Failed process was running: select
> pp.default_code,pc.product_code,pp.name_template,pc.product_name,rp.name
> from product_product pp inner join product_customer_code pc on
> pc.product_id=pp.id
> 2016-09-12 09:00:01 PDT LOG:  terminating any other active server processes
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat your command.
> 2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
> crash of another server process
> 2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
> server process to roll back the current transaction and exit, because
> another server process exited abnormally and possibly corrupted shared
> memory.
> 2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
> reconnect to the database and repeat 

[GENERAL] psql 9.3 automatic recovery in progress

2016-10-10 Thread Periko Support
I got some issues with my DB under Ubuntu 14.x.
PostgreSQL 9.3, odoo 7.x.

This machine is under KVM on CentOS 6.x.

It has a RAID1 with SSD drives, used only for this VM.

I detected some unexpected shutdowns; see these lines:

2016-09-12 08:59:25 PDT ERROR:  missing FROM-clause entry for table
"rp" at character 73
2016-09-12 08:59:25 PDT STATEMENT:  select
pp.default_code,pc.product_code,pp.name_template,pc.product_name,rp.name
from product_product pp inner join product_customer_code pc on
pc.product_id=pp.id
2016-09-12 08:59:26 PDT LOG:  connection received: host=192.168.2.153 port=59335
2016-09-12 08:59:26 PDT LOG:  connection authorized: user=openerp
database=Mueblex
2016-09-12 09:00:01 PDT LOG:  connection received: host=::1 port=43536
2016-09-12 09:00:01 PDT LOG:  connection authorized: user=openerp
database=template1
2016-09-12 09:00:01 PDT LOG:  server process (PID 23958) was
terminated by signal 9: Killed
2016-09-12 09:00:01 PDT DETAIL:  Failed process was running: select
pp.default_code,pc.product_code,pp.name_template,pc.product_name,rp.name
from product_product pp inner join product_customer_code pc on
pc.product_id=pp.id
2016-09-12 09:00:01 PDT LOG:  terminating any other active server processes
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 09:00:01 PDT HINT:  In a moment you should be able to
reconnect to the database and repeat your command.
2016-09-12 09:00:01 PDT WARNING:  terminating connection because of
crash of another server process
2016-09-12 09:00:01 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.
2016-09-12 

Re: [GENERAL] HA Cluster Solution?

2016-10-10 Thread Periko Support
I have done some tuning of PostgreSQL based on odoo, but I want to know if a
cluster can help to get better performance; that is why I am asking here in
the community, where people have experience with clusters.
Appreciate your help, Pavel.

On Sun, Oct 9, 2016 at 9:29 PM, Pavel Stehule  wrote:
> Hi
>
> 2016-10-10 6:22 GMT+02:00 Periko Support :
>>
>>   Hi.
>>
>>   We are searching for a cluster solution for PostgreSQL; we need to
>> increase the performance of our current server, running 9.3 under
>> Ubuntu 14.
>>
>>The db is for odoo 7.x
>
>
> I have some experience with odoo 7.x - there are a lot of pretty slow
> queries without a good possibility of optimization. What is more, there are
> some articles on the net that advise suboptimal configuration of Postgres
> and of odoo.
>
> If you can, try to upgrade to PostgreSQL 9.6 - more CPU per query can help -
> or upgrade to a newer odoo (I have no real experience with it, but the web
> speaks about a lot of optimization).
>
> If you have customizations of your own, look at the slow queries - a lot of
> performance issues can be fixed there.
>
> Regards
>
> Pavel
>
>
>
>>
>>
>>Reading the PostgreSQL docs on clustering, there are different
>> solutions like postgresql-xl and pgpool, which are more about clustering.
>>
>> In your experience, which PostgreSQL tools for clustering and HA have
>> given you good performance?
>>
>> Any comments are welcome; thanks for your time!!!
>>
>>
>
>




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-10 Thread Merlin Moncure
On Thu, Oct 6, 2016 at 4:21 AM, Geoff Winkless  wrote:
> Hi
>
> I have code that does (inside a single transaction)
>
> DROP TABLE IF EXISTS mytable; CREATE TABLE mytable 
>
> Occasionally this produces
>
> ERROR: duplicate key value violates unique constraint
> "pg_type_typname_nsp_index" DETAIL: Key (typname,
> typnamespace)=(mytable, 2200) already exists.
>
> I can get away from this by using CREATE TABLE IF NOT EXISTS in the
> same code, but there's the potential that the wrong data will end up
> in the table if that happens, and it also seems a little odd.
>
> Would you not expect this transaction to be atomic? ie at commit time,
> the transaction should drop any table with the same name that has been
> created by another transaction.

Transactions do not guarantee atomicity in the sense that you mean.
MVCC rules (which DDL generally falls under) try to interleave work as
much as possible, which is the problem you're facing.  What you want
is fully serialized creation, which can be accomplished with an advisory
lock or (better, imo) a leading

LOCK TABLE mytable;

Also, this is not a good pattern.  You ought to be using temp tables
or other mechanisms to store transaction-local data.

merlin
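
A minimal sketch of both options, assuming the table already exists when the
transaction starts (LOCK TABLE errors out otherwise); the column list and the
advisory-lock key are invented for the example:

BEGIN;
LOCK TABLE mytable IN ACCESS EXCLUSIVE MODE;   -- serializes concurrent drop/create
DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable (id int PRIMARY KEY, payload text);
COMMIT;

-- or, independent of whether the table exists yet:
BEGIN;
SELECT pg_advisory_xact_lock(42);              -- any key the sessions agree on
DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable (id int PRIMARY KEY, payload text);
COMMIT;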




Re: [GENERAL] How pg_dump works

2016-10-10 Thread Moreno Andreo

On 10/10/2016 14:16, haman...@t-online.de wrote:

Hi all,
  I need to pg_dump a database to another server.
The particularity is that the database is bigger than the space remaining on
disk. Obviously, this is not a problem because I'm dumping to another
host, but I need to know whether the procedure streams data to the remote
host or whether the data is packed up in temporary file(s) that are then
sent to the remote host.
The latter would be a real problem, because there is a good chance I'll
have my disk filled up...

I've not found details on this in documentation.

Thanks
Moreno.


Hi Moreno,

it is one big stream.

OK, that's good news :-)

Thanks
Moreno.

I am sending a compressed pg_dump and can actually use
zcat dumpfile.gz | psql mydatabase
on the other end

Regards
Wolfgang











Re: [GENERAL] How pg_dump works

2016-10-10 Thread Moreno Andreo

On 10/10/2016 14:16, Achilleas Mantzios wrote:

On 10/10/2016 14:50, Moreno Andreo wrote:

Hi all,
I need to pg_dump a database to another server.
The particularity is that the database is bigger than the space remaining
on disk. Obviously, this is not a problem because I'm dumping to
another host, but I need to know whether the procedure streams data to
the remote host or whether the data is packed up in temporary file(s)
that are then sent to the remote host.
The latter would be a real problem, because there is a good chance
I'll have my disk filled up...


I've not found details on this in documentation.



In fact, the docs contain all the info you might ever need; the PostgreSQL
project excels at that compared to *many* free-software projects. Now what
you could do is something like:

createdb -h your_big_server your_database
pg_dump -h your_small_server your_database | psql -h your_big_server -f - your_database


Thanks, I was aware of this usage. I just wanted to be sure no 
additional disk space has to be used in "small_server".





Thanks
Moreno.















Re: [GENERAL] How pg_dump works

2016-10-10 Thread hamann . w
>> Hi all,
>>  I need to pg_dump a database to another server.
>> The particularity is that the database is bigger than the space remaining
>> on disk. Obviously, this is not a problem because I'm dumping to another
>> host, but I need to know whether the procedure streams data to the remote
>> host or whether the data is packed up in temporary file(s) that are then
>> sent to the remote host.
>> The latter would be a real problem, because there is a good chance I'll
>> have my disk filled up...
>> 
>> I've not found details on this in documentation.
>> 
>> Thanks
>> Moreno.
>> 
Hi Moreno,

it is one big stream.
I am sending a compressed pg_dump and can actually use
zcat dumpfile.gz | psql mydatabase
on the other end

Regards
Wolfgang
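
The same idea works as a single pipe if even the compressed dump file has
nowhere to sit; host and database names below are placeholders, and the
target database must already exist on the receiving side:

# create the target first: createdb mydatabase (on big_server)
pg_dump -h small_server mydatabase | gzip | \
    ssh big_server 'gunzip -c | psql mydatabase'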






Re: [GENERAL] How pg_dump works

2016-10-10 Thread Achilleas Mantzios

On 10/10/2016 14:50, Moreno Andreo wrote:

Hi all,
I need to pg_dump a database to another server.
The particularity is that the database is bigger than the space remaining on disk. Obviously, this is not a problem because I'm dumping to another host, but I need to know whether the procedure streams data
to the remote host or whether the data is packed up in temporary file(s) that are then sent to the remote host.

The latter would be a real problem, because there is a good chance I'll have my
disk filled up...

I've not found details on this in documentation.



In fact, the docs contain all the info you might ever need; the PostgreSQL
project excels at that compared to *many* free-software projects. Now what you
could do is something like:
createdb -h your_big_server your_database
pg_dump -h your_small_server your_database | psql -h your_big_server -f - your_database


Thanks
Moreno.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt





[GENERAL] How pg_dump works

2016-10-10 Thread Moreno Andreo

Hi all,
I need to pg_dump a database to another server.
The particularity is that the database is bigger than the space remaining on
disk. Obviously, this is not a problem because I'm dumping to another
host, but I need to know whether the procedure streams data to the remote
host or whether the data is packed up in temporary file(s) that are then
sent to the remote host.
The latter would be a real problem, because there is a good chance I'll
have my disk filled up...


I've not found details on this in documentation.

Thanks
Moreno.







Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-10 Thread Geoff Winkless
On 6 October 2016 at 18:33, Tom Lane  wrote:
> I'm a bit confused about exactly what the context is here.  AFAICS,
> the fragment you quoted should work as you expect, as long as the
> table always exists beforehand. Then, the DROPs serialize the
> transactions' access to the table and all is well.  On the other hand,
> if the table *doesn't* exist beforehand, there is nothing to serialize
> on and the behavior Adrian exhibited is what I'd expect.

I accept that this is how things are. I'm just surprised that "DROP
TABLE IF EXISTS" doesn't do the exists-test at commit time, rather
than effectively being "DROP TABLE IF
EXISTED-AT-SOME-RANDOM-POINT-IN-THE-PAST".

At the end of the day this isn't a massive deal - I can simply add
exception code around the failure, as Francisco suggested, or add
oplocks around the code (as per Kevin), or (I suppose) I could do an
individual transaction to CREATE TABLE IF NOT EXISTS as a separate
transaction before I start; it's just something that caught me out
because I didn't expect it to be a problem.

Geoff
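
For the record, a sketch of that last option, leaning on Tom's point that the
DROPs serialize once the table is guaranteed to exist beforehand; the column
list is invented for the example:

-- run once, in its own transaction (e.g. at application start-up)
CREATE TABLE IF NOT EXISTS mytable (id int);

-- the original pattern then serializes on the DROP
BEGIN;
DROP TABLE IF EXISTS mytable;
CREATE TABLE mytable (id int PRIMARY KEY, payload text);
-- ... populate ...
COMMIT;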




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-10 Thread Geoff Winkless
On 6 October 2016 at 18:25, Adrian Klaver  wrote:
> I do not see sarcasm, I see someone trying to work through what is a complex
> scenario.

When someone talks about things "magically working as you think it
should" I see sarcasm. Perhaps I misread, in which case I apologise.

>> _As far as the transaction is concerned_, after the command the table
>
> What command?

The "DROP TABLE IF EXISTS" command.

>> But that's NOT what this function is for. It's designed to remove a
>> table without producing an error if it didn't exist. The fact that its
>> RETURN value is "DROP TABLE", whether it dropped or not, shows this.
>
> What function?

I'm mixing up terminologies (statement, not function). I'm still
talking about "DROP TABLE IF EXISTS" here.

Geoff

